00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2375
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3640
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.096 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.097 The recommended git tool is: git
00:00:00.097 using credential 00000000-0000-0000-0000-000000000002
00:00:00.098 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.159 Fetching changes from the remote Git repository
00:00:00.161 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.224 Using shallow fetch with depth 1
00:00:00.224 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.224 > git --version # timeout=10
00:00:00.273 > git --version # 'git version 2.39.2'
00:00:00.273 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.309 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.309 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.109 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.120 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.132 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:07.132 > git config core.sparsecheckout # timeout=10
00:00:07.143 > git read-tree -mu HEAD # timeout=10
00:00:07.161 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:07.180 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:07.180 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:07.280 [Pipeline] Start of Pipeline
00:00:07.295 [Pipeline] library
00:00:07.297 Loading library shm_lib@master
00:00:07.297 Library shm_lib@master is cached. Copying from home.
00:00:07.312 [Pipeline] node
00:00:07.332 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.335 [Pipeline] {
00:00:07.346 [Pipeline] catchError
00:00:07.348 [Pipeline] {
00:00:07.362 [Pipeline] wrap
00:00:07.370 [Pipeline] {
00:00:07.378 [Pipeline] stage
00:00:07.380 [Pipeline] { (Prologue)
00:00:07.615 [Pipeline] sh
00:00:08.388 + logger -p user.info -t JENKINS-CI
00:00:08.410 [Pipeline] echo
00:00:08.412 Node: GP11
00:00:08.421 [Pipeline] sh
00:00:08.755 [Pipeline] setCustomBuildProperty
00:00:08.763 [Pipeline] echo
00:00:08.764 Cleanup processes
00:00:08.767 [Pipeline] sh
00:00:09.052 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.052 4743 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.065 [Pipeline] sh
00:00:09.348 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.348 ++ grep -v 'sudo pgrep'
00:00:09.348 ++ awk '{print $1}'
00:00:09.348 + sudo kill -9
00:00:09.348 + true
00:00:09.366 [Pipeline] cleanWs
00:00:09.378 [WS-CLEANUP] Deleting project workspace...
00:00:09.378 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.390 [WS-CLEANUP] done
00:00:09.396 [Pipeline] setCustomBuildProperty
00:00:09.415 [Pipeline] sh
00:00:09.701 + sudo git config --global --replace-all safe.directory '*'
00:00:09.802 [Pipeline] httpRequest
00:00:11.898 [Pipeline] echo
00:00:11.901 Sorcerer 10.211.164.20 is alive
00:00:11.912 [Pipeline] retry
00:00:11.915 [Pipeline] {
00:00:11.930 [Pipeline] httpRequest
00:00:11.935 HttpMethod: GET
00:00:11.936 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:11.937 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:11.949 Response Code: HTTP/1.1 200 OK
00:00:11.949 Success: Status code 200 is in the accepted range: 200,404
00:00:11.950 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:15.471 [Pipeline] }
00:00:15.503 [Pipeline] // retry
00:00:15.516 [Pipeline] sh
00:00:15.802 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:15.818 [Pipeline] httpRequest
00:00:16.248 [Pipeline] echo
00:00:16.250 Sorcerer 10.211.164.20 is alive
00:00:16.260 [Pipeline] retry
00:00:16.262 [Pipeline] {
00:00:16.276 [Pipeline] httpRequest
00:00:16.282 HttpMethod: GET
00:00:16.282 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:16.283 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:16.306 Response Code: HTTP/1.1 200 OK
00:00:16.306 Success: Status code 200 is in the accepted range: 200,404
00:00:16.307 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:54.347 [Pipeline] }
00:00:54.366 [Pipeline] // retry
00:00:54.375 [Pipeline] sh
00:00:54.673 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:57.223 [Pipeline] sh
00:00:57.514 + git -C spdk log --oneline -n5
00:00:57.514 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:00:57.515 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort()
00:00:57.515 4bcab9fb9 correct kick for CQ full case
00:00:57.515 8531656d3 test/nvmf: Interrupt test for local pcie nvme device
00:00:57.515 318515b44 nvme/perf: interrupt mode support for pcie controller
00:00:57.533 [Pipeline] withCredentials
00:00:57.546 > git --version # timeout=10
00:00:57.559 > git --version # 'git version 2.39.2'
00:00:57.584 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:57.587 [Pipeline] {
00:00:57.595 [Pipeline] retry
00:00:57.597 [Pipeline] {
00:00:57.611 [Pipeline] sh
00:00:58.095 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:58.367 [Pipeline] }
00:00:58.385 [Pipeline] // retry
00:00:58.390 [Pipeline] }
00:00:58.407 [Pipeline] // withCredentials
00:00:58.417 [Pipeline] httpRequest
00:00:58.864 [Pipeline] echo
00:00:58.865 Sorcerer 10.211.164.20 is alive
00:00:58.871 [Pipeline] retry
00:00:58.873 [Pipeline] {
00:00:58.881 [Pipeline] httpRequest
00:00:58.886 HttpMethod: GET
00:00:58.886 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:58.887 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:58.909 Response Code: HTTP/1.1 200 OK
00:00:58.910 Success: Status code 200 is in the accepted range: 200,404
00:00:58.910 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:20.179 [Pipeline] }
00:01:20.196 [Pipeline] // retry
00:01:20.205 [Pipeline] sh
00:01:20.490 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:22.432 [Pipeline] sh
00:01:22.716 + git -C dpdk log --oneline -n5
00:01:22.716 caf0f5d395 version: 22.11.4
00:01:22.716 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:22.716 dc9c799c7d vhost: fix missing spinlock unlock
00:01:22.716 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:22.716 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:22.728 [Pipeline] }
00:01:22.743 [Pipeline] // stage
00:01:22.752 [Pipeline] stage
00:01:22.755 [Pipeline] { (Prepare)
00:01:22.776 [Pipeline] writeFile
00:01:22.792 [Pipeline] sh
00:01:23.079 + logger -p user.info -t JENKINS-CI
00:01:23.087 [Pipeline] sh
00:01:23.364 + logger -p user.info -t JENKINS-CI
00:01:23.378 [Pipeline] sh
00:01:23.662 + cat autorun-spdk.conf
00:01:23.662 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:23.662 SPDK_TEST_NVMF=1
00:01:23.662 SPDK_TEST_NVME_CLI=1
00:01:23.662 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:23.662 SPDK_TEST_NVMF_NICS=e810
00:01:23.662 SPDK_TEST_VFIOUSER=1
00:01:23.662 SPDK_RUN_UBSAN=1
00:01:23.662 NET_TYPE=phy
00:01:23.662 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:23.662 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:23.672 RUN_NIGHTLY=1
00:01:23.677 [Pipeline] readFile
00:01:23.707 [Pipeline] withEnv
00:01:23.710 [Pipeline] {
00:01:23.723 [Pipeline] sh
00:01:24.015 + set -ex
00:01:24.015 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:24.015 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:24.015 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:24.015 ++ SPDK_TEST_NVMF=1
00:01:24.015 ++ SPDK_TEST_NVME_CLI=1
00:01:24.015 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:24.015 ++ SPDK_TEST_NVMF_NICS=e810
00:01:24.015 ++ SPDK_TEST_VFIOUSER=1
00:01:24.015 ++ SPDK_RUN_UBSAN=1
00:01:24.015 ++ NET_TYPE=phy
00:01:24.015 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:24.015 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:24.015 ++ RUN_NIGHTLY=1
00:01:24.015 + case $SPDK_TEST_NVMF_NICS in
00:01:24.015 + DRIVERS=ice
00:01:24.015 + [[ tcp == \r\d\m\a ]]
00:01:24.015 + [[ -n ice ]]
00:01:24.015 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:24.015 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:27.313 rmmod: ERROR: Module irdma is not currently loaded
00:01:27.313 rmmod: ERROR: Module i40iw is not currently loaded
00:01:27.313 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:27.313 + true
00:01:27.313 + for D in $DRIVERS
00:01:27.313 + sudo modprobe ice
00:01:27.313 + exit 0
00:01:27.320 [Pipeline] }
00:01:27.329 [Pipeline] // withEnv
00:01:27.332 [Pipeline] }
00:01:27.341 [Pipeline] // stage
00:01:27.347 [Pipeline] catchError
00:01:27.348 [Pipeline] {
00:01:27.355 [Pipeline] timeout
00:01:27.356 Timeout set to expire in 1 hr 0 min
00:01:27.359 [Pipeline] {
00:01:27.369 [Pipeline] stage
00:01:27.370 [Pipeline] { (Tests)
00:01:27.380 [Pipeline] sh
00:01:27.666 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:27.666 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:27.666 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:27.666 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:27.666 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:27.666 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:27.666 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:27.666 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:27.666 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:27.666 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:27.666 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:27.666 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:27.666 + source /etc/os-release
00:01:27.666 ++ NAME='Fedora Linux'
00:01:27.666 ++ VERSION='39 (Cloud Edition)'
00:01:27.666 ++ ID=fedora
00:01:27.666 ++ VERSION_ID=39
00:01:27.666 ++ VERSION_CODENAME=
00:01:27.666 ++ PLATFORM_ID=platform:f39
00:01:27.666 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:27.666 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:27.666 ++ LOGO=fedora-logo-icon
00:01:27.666 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:27.666 ++ HOME_URL=https://fedoraproject.org/
00:01:27.666 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:27.666 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:27.666 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:27.666 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:27.666 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:27.666 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:27.666 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:27.666 ++ SUPPORT_END=2024-11-12
00:01:27.667 ++ VARIANT='Cloud Edition'
00:01:27.667 ++ VARIANT_ID=cloud
00:01:27.667 + uname -a
00:01:27.667 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:27.667 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:28.603 Hugepages
00:01:28.603 node hugesize free / total
00:01:28.603 node0 1048576kB 0 / 0
00:01:28.603 node0 2048kB 0 / 0
00:01:28.603 node1 1048576kB 0 / 0
00:01:28.603 node1 2048kB 0 / 0
00:01:28.603
00:01:28.603 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:28.603 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:28.603 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:28.603 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:28.603 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:28.603 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:28.603 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:28.603 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:28.603 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:28.603 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:28.603 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:28.603 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:28.603 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:28.603 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:28.603 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:28.603 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:28.603 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:28.603 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:28.603 + rm -f /tmp/spdk-ld-path 00:01:28.603 + source autorun-spdk.conf 00:01:28.603 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.603 ++ SPDK_TEST_NVMF=1 00:01:28.603 ++ SPDK_TEST_NVME_CLI=1 00:01:28.603 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.603 ++ SPDK_TEST_NVMF_NICS=e810 00:01:28.603 ++ SPDK_TEST_VFIOUSER=1 00:01:28.603 ++ SPDK_RUN_UBSAN=1 00:01:28.603 ++ NET_TYPE=phy 00:01:28.603 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:28.603 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:28.603 ++ RUN_NIGHTLY=1 00:01:28.603 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:28.603 + [[ -n '' ]] 00:01:28.603 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:28.603 + for M in /var/spdk/build-*-manifest.txt 00:01:28.603 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:28.603 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.603 + for M in /var/spdk/build-*-manifest.txt 00:01:28.603 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:28.603 + cp /var/spdk/build-pkg-manifest.txt 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.603 + for M in /var/spdk/build-*-manifest.txt 00:01:28.603 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:28.603 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.603 ++ uname 00:01:28.604 + [[ Linux == \L\i\n\u\x ]] 00:01:28.604 + sudo dmesg -T 00:01:28.604 + sudo dmesg --clear 00:01:28.863 + dmesg_pid=6057 00:01:28.863 + [[ Fedora Linux == FreeBSD ]] 00:01:28.863 + sudo dmesg -Tw 00:01:28.863 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.863 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.863 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:28.863 + [[ -x /usr/src/fio-static/fio ]] 00:01:28.863 + export FIO_BIN=/usr/src/fio-static/fio 00:01:28.863 + FIO_BIN=/usr/src/fio-static/fio 00:01:28.863 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:28.863 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:28.863 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:28.863 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.863 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.863 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:28.863 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.863 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.863 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:28.863 00:06:52 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:28.863 00:06:52 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:28.863 00:06:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.863 00:06:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:28.863 00:06:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- 
$ SPDK_TEST_NVME_CLI=1 00:01:28.863 00:06:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.863 00:06:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:28.863 00:06:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:28.863 00:06:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:28.863 00:06:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:28.863 00:06:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:28.863 00:06:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:28.863 00:06:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:01:28.864 00:06:52 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:28.864 00:06:52 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:28.864 00:06:52 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:28.864 00:06:52 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:28.864 00:06:52 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:28.864 00:06:52 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:28.864 00:06:52 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:28.864 00:06:52 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:28.864 00:06:52 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.864 00:06:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.864 00:06:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.864 00:06:52 -- paths/export.sh@5 -- $ export PATH 00:01:28.864 00:06:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.864 00:06:52 -- 
common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:28.864 00:06:52 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:28.864 00:06:52 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731884812.XXXXXX 00:01:28.864 00:06:52 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731884812.FdMu12 00:01:28.864 00:06:52 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:28.864 00:06:52 -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']' 00:01:28.864 00:06:52 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:28.864 00:06:52 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:28.864 00:06:52 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:28.864 00:06:52 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:28.864 00:06:52 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:28.864 00:06:52 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:28.864 00:06:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.864 00:06:52 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:28.864 00:06:52 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:28.864 00:06:52 -- pm/common@17 -- $ local monitor 00:01:28.864 00:06:52 -- 
pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.864 00:06:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.864 00:06:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.864 00:06:52 -- pm/common@21 -- $ date +%s 00:01:28.864 00:06:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.864 00:06:52 -- pm/common@21 -- $ date +%s 00:01:28.864 00:06:52 -- pm/common@25 -- $ sleep 1 00:01:28.864 00:06:52 -- pm/common@21 -- $ date +%s 00:01:28.864 00:06:52 -- pm/common@21 -- $ date +%s 00:01:28.864 00:06:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731884812 00:01:28.864 00:06:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731884812 00:01:28.864 00:06:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731884812 00:01:28.864 00:06:52 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731884812 00:01:28.864 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731884812_collect-cpu-load.pm.log 00:01:28.864 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731884812_collect-vmstat.pm.log 00:01:28.864 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731884812_collect-cpu-temp.pm.log 00:01:28.864 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731884812_collect-bmc-pm.bmc.pm.log 00:01:29.800 00:06:53 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:29.800 00:06:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:29.800 00:06:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:29.800 00:06:53 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:29.800 00:06:53 -- spdk/autobuild.sh@16 -- $ date -u 00:01:29.800 Sun Nov 17 11:06:53 PM UTC 2024 00:01:29.800 00:06:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:29.800 v25.01-pre-189-g83e8405e4 00:01:29.800 00:06:53 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:29.800 00:06:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:29.800 00:06:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:29.800 00:06:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:29.800 00:06:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:29.801 00:06:53 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.801 ************************************ 00:01:29.801 START TEST ubsan 00:01:29.801 ************************************ 00:01:29.801 00:06:53 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:29.801 using ubsan 00:01:29.801 00:01:29.801 real 0m0.000s 00:01:29.801 user 0m0.000s 00:01:29.801 sys 0m0.000s 00:01:29.801 00:06:53 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:29.801 00:06:53 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:29.801 ************************************ 00:01:29.801 END TEST ubsan 00:01:29.801 ************************************ 00:01:30.061 00:06:53 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:30.061 00:06:53 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:30.061 00:06:53 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:30.061 00:06:53 -- 
common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:01:30.061 00:06:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:30.061 00:06:53 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.061 ************************************ 00:01:30.061 START TEST build_native_dpdk 00:01:30.061 ************************************ 00:01:30.061 00:06:53 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:30.061 00:06:53 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:30.061 caf0f5d395 version: 22.11.4 00:01:30.061 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:30.061 dc9c799c7d vhost: fix missing spinlock unlock 00:01:30.061 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:30.061 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" 
"bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:30.061 00:06:53 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:30.061 00:06:53 build_native_dpdk -- 
scripts/common.sh@364 -- $ (( v = 0 )) 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:30.061 00:06:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:30.062 00:06:53 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:30.062 patching file config/rte_config.h 00:01:30.062 Hunk #1 succeeded at 60 (offset 1 line). 
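The trace above walks through `scripts/common.sh`'s `cmp_versions` helper comparing the dotted versions `22.11.4` and `21.11.0` field by field (it splits on `IFS=.-:`, then compares numeric components until one differs). A minimal standalone sketch of that logic, simplified from the trace rather than copied from the SPDK source (the real helper also validates each field with a `decimal` regex check, which this sketch omits):

```shell
#!/usr/bin/env bash
# Simplified reconstruction of the cmp_versions logic traced above:
# split both versions on '.', '-' and ':' and compare numeric fields
# left to right until one side wins.
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # first differing component decides the comparison
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    # all components equal: only the inclusive operators succeed
    [[ $op == '<=' || $op == '>=' || $op == '==' ]]
}

lt() { cmp_versions "$1" '<'  "$2"; }
ge() { cmp_versions "$1" '>=' "$2"; }

lt 22.11.4 21.11.0 && echo "older" || echo "not older"   # not older (matches the trace's return 1)
lt 22.11.4 24.07.0 && echo "older" || echo "not older"   # older
```

This reproduces the three checks visible in the log: `lt 22.11.4 21.11.0` fails (so the legacy patch path is skipped), `lt 22.11.4 24.07.0` succeeds (so the `rte_pcapng.c` patch is applied), and `ge 22.11.4 24.07.0` fails.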
00:01:30.062 00:06:53 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:30.062 00:06:53 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:30.062 patching file lib/pcapng/rte_pcapng.c 00:01:30.062 Hunk #1 succeeded at 110 (offset -18 lines). 
00:01:30.062 00:06:53 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:30.062 00:06:53 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:30.062 00:06:53 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:30.062 00:06:53 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:30.062 00:06:53 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:30.062 00:06:53 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:30.062 00:06:53 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:36.670 The Meson build system 00:01:36.670 Version: 
1.5.0 00:01:36.670 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:36.670 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:36.670 Build type: native build 00:01:36.670 Program cat found: YES (/usr/bin/cat) 00:01:36.670 Project name: DPDK 00:01:36.670 Project version: 22.11.4 00:01:36.670 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:36.670 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:36.670 Host machine cpu family: x86_64 00:01:36.670 Host machine cpu: x86_64 00:01:36.670 Message: ## Building in Developer Mode ## 00:01:36.670 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:36.670 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:36.670 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:36.670 Program objdump found: YES (/usr/bin/objdump) 00:01:36.670 Program python3 found: YES (/usr/bin/python3) 00:01:36.670 Program cat found: YES (/usr/bin/cat) 00:01:36.670 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:36.670 Checking for size of "void *" : 8 00:01:36.670 Checking for size of "void *" : 8 (cached) 00:01:36.670 Library m found: YES 00:01:36.670 Library numa found: YES 00:01:36.670 Has header "numaif.h" : YES 00:01:36.670 Library fdt found: NO 00:01:36.670 Library execinfo found: NO 00:01:36.670 Has header "execinfo.h" : YES 00:01:36.670 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:36.670 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:36.670 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:36.670 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:36.670 Run-time dependency openssl found: YES 3.1.1 00:01:36.670 Run-time dependency libpcap found: YES 1.10.4 00:01:36.670 Has header "pcap.h" with dependency libpcap: YES 00:01:36.670 Compiler for C supports arguments -Wcast-qual: YES 00:01:36.670 Compiler for C supports arguments -Wdeprecated: YES 00:01:36.670 Compiler for C supports arguments -Wformat: YES 00:01:36.670 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:36.670 Compiler for C supports arguments -Wformat-security: NO 00:01:36.670 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:36.670 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:36.670 Compiler for C supports arguments -Wnested-externs: YES 00:01:36.670 Compiler for C supports arguments -Wold-style-definition: YES 00:01:36.670 Compiler for C supports arguments -Wpointer-arith: YES 00:01:36.670 Compiler for C supports arguments -Wsign-compare: YES 00:01:36.670 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:36.670 Compiler for C supports arguments -Wundef: YES 00:01:36.670 Compiler for C supports arguments -Wwrite-strings: YES 00:01:36.670 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:36.670 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:36.670 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:36.670 
Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:36.670 Compiler for C supports arguments -mavx512f: YES 00:01:36.670 Checking if "AVX512 checking" compiles: YES 00:01:36.670 Fetching value of define "__SSE4_2__" : 1 00:01:36.670 Fetching value of define "__AES__" : 1 00:01:36.670 Fetching value of define "__AVX__" : 1 00:01:36.670 Fetching value of define "__AVX2__" : (undefined) 00:01:36.670 Fetching value of define "__AVX512BW__" : (undefined) 00:01:36.670 Fetching value of define "__AVX512CD__" : (undefined) 00:01:36.670 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:36.670 Fetching value of define "__AVX512F__" : (undefined) 00:01:36.670 Fetching value of define "__AVX512VL__" : (undefined) 00:01:36.670 Fetching value of define "__PCLMUL__" : 1 00:01:36.670 Fetching value of define "__RDRND__" : 1 00:01:36.670 Fetching value of define "__RDSEED__" : (undefined) 00:01:36.670 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:36.670 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:36.670 Message: lib/kvargs: Defining dependency "kvargs" 00:01:36.670 Message: lib/telemetry: Defining dependency "telemetry" 00:01:36.670 Checking for function "getentropy" : YES 00:01:36.670 Message: lib/eal: Defining dependency "eal" 00:01:36.670 Message: lib/ring: Defining dependency "ring" 00:01:36.670 Message: lib/rcu: Defining dependency "rcu" 00:01:36.670 Message: lib/mempool: Defining dependency "mempool" 00:01:36.670 Message: lib/mbuf: Defining dependency "mbuf" 00:01:36.670 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:36.670 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:36.670 Compiler for C supports arguments -mpclmul: YES 00:01:36.670 Compiler for C supports arguments -maes: YES 00:01:36.670 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:36.670 Compiler for C supports arguments -mavx512bw: YES 00:01:36.670 Compiler for C supports arguments -mavx512dq: YES 
00:01:36.670 Compiler for C supports arguments -mavx512vl: YES 00:01:36.670 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:36.670 Compiler for C supports arguments -mavx2: YES 00:01:36.670 Compiler for C supports arguments -mavx: YES 00:01:36.670 Message: lib/net: Defining dependency "net" 00:01:36.670 Message: lib/meter: Defining dependency "meter" 00:01:36.670 Message: lib/ethdev: Defining dependency "ethdev" 00:01:36.670 Message: lib/pci: Defining dependency "pci" 00:01:36.670 Message: lib/cmdline: Defining dependency "cmdline" 00:01:36.670 Message: lib/metrics: Defining dependency "metrics" 00:01:36.670 Message: lib/hash: Defining dependency "hash" 00:01:36.670 Message: lib/timer: Defining dependency "timer" 00:01:36.670 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:36.670 Compiler for C supports arguments -mavx2: YES (cached) 00:01:36.670 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:36.670 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:36.670 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:36.670 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:36.670 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:36.670 Message: lib/acl: Defining dependency "acl" 00:01:36.670 Message: lib/bbdev: Defining dependency "bbdev" 00:01:36.670 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:36.670 Run-time dependency libelf found: YES 0.191 00:01:36.670 Message: lib/bpf: Defining dependency "bpf" 00:01:36.670 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:36.670 Message: lib/compressdev: Defining dependency "compressdev" 00:01:36.670 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:36.670 Message: lib/distributor: Defining dependency "distributor" 00:01:36.670 Message: lib/efd: Defining dependency "efd" 00:01:36.670 Message: lib/eventdev: Defining dependency "eventdev" 00:01:36.670 
Message: lib/gpudev: Defining dependency "gpudev" 00:01:36.670 Message: lib/gro: Defining dependency "gro" 00:01:36.670 Message: lib/gso: Defining dependency "gso" 00:01:36.670 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:36.670 Message: lib/jobstats: Defining dependency "jobstats" 00:01:36.670 Message: lib/latencystats: Defining dependency "latencystats" 00:01:36.670 Message: lib/lpm: Defining dependency "lpm" 00:01:36.670 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:36.670 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:36.670 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:36.670 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:36.670 Message: lib/member: Defining dependency "member" 00:01:36.670 Message: lib/pcapng: Defining dependency "pcapng" 00:01:36.670 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:36.670 Message: lib/power: Defining dependency "power" 00:01:36.670 Message: lib/rawdev: Defining dependency "rawdev" 00:01:36.670 Message: lib/regexdev: Defining dependency "regexdev" 00:01:36.670 Message: lib/dmadev: Defining dependency "dmadev" 00:01:36.670 Message: lib/rib: Defining dependency "rib" 00:01:36.670 Message: lib/reorder: Defining dependency "reorder" 00:01:36.670 Message: lib/sched: Defining dependency "sched" 00:01:36.670 Message: lib/security: Defining dependency "security" 00:01:36.670 Message: lib/stack: Defining dependency "stack" 00:01:36.670 Has header "linux/userfaultfd.h" : YES 00:01:36.670 Message: lib/vhost: Defining dependency "vhost" 00:01:36.670 Message: lib/ipsec: Defining dependency "ipsec" 00:01:36.670 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:36.670 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:36.670 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:36.670 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:36.670 Message: lib/fib: 
Defining dependency "fib" 00:01:36.670 Message: lib/port: Defining dependency "port" 00:01:36.670 Message: lib/pdump: Defining dependency "pdump" 00:01:36.670 Message: lib/table: Defining dependency "table" 00:01:36.670 Message: lib/pipeline: Defining dependency "pipeline" 00:01:36.670 Message: lib/graph: Defining dependency "graph" 00:01:36.670 Message: lib/node: Defining dependency "node" 00:01:36.670 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:36.670 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:36.670 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:36.670 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:36.670 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:36.670 Compiler for C supports arguments -Wno-unused-value: YES 00:01:37.611 Compiler for C supports arguments -Wno-format: YES 00:01:37.611 Compiler for C supports arguments -Wno-format-security: YES 00:01:37.611 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:37.611 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:37.611 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:37.611 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:37.611 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:37.611 Compiler for C supports arguments -mavx2: YES (cached) 00:01:37.611 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:37.611 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:37.611 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:37.611 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:37.611 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:37.611 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:37.611 Configuring doxy-api.conf using configuration 00:01:37.611 Program sphinx-build found: NO 00:01:37.611 Configuring rte_build_config.h using 
configuration 00:01:37.611 Message: 00:01:37.611 ================= 00:01:37.611 Applications Enabled 00:01:37.611 ================= 00:01:37.611 00:01:37.611 apps: 00:01:37.611 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:37.611 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:37.611 test-security-perf, 00:01:37.611 00:01:37.611 Message: 00:01:37.611 ================= 00:01:37.611 Libraries Enabled 00:01:37.611 ================= 00:01:37.611 00:01:37.611 libs: 00:01:37.611 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:37.611 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:37.611 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:37.611 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:37.611 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:37.611 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:37.611 table, pipeline, graph, node, 00:01:37.611 00:01:37.611 Message: 00:01:37.611 =============== 00:01:37.611 Drivers Enabled 00:01:37.611 =============== 00:01:37.611 00:01:37.611 common: 00:01:37.611 00:01:37.611 bus: 00:01:37.611 pci, vdev, 00:01:37.611 mempool: 00:01:37.611 ring, 00:01:37.611 dma: 00:01:37.611 00:01:37.611 net: 00:01:37.611 i40e, 00:01:37.611 raw: 00:01:37.611 00:01:37.611 crypto: 00:01:37.611 00:01:37.611 compress: 00:01:37.611 00:01:37.611 regex: 00:01:37.611 00:01:37.611 vdpa: 00:01:37.611 00:01:37.611 event: 00:01:37.611 00:01:37.611 baseband: 00:01:37.611 00:01:37.611 gpu: 00:01:37.611 00:01:37.611 00:01:37.611 Message: 00:01:37.611 ================= 00:01:37.611 Content Skipped 00:01:37.611 ================= 00:01:37.611 00:01:37.611 apps: 00:01:37.611 00:01:37.611 libs: 00:01:37.611 kni: explicitly disabled via build config (deprecated lib) 00:01:37.611 flow_classify: explicitly disabled via build config 
(deprecated lib) 00:01:37.611 00:01:37.611 drivers: 00:01:37.611 common/cpt: not in enabled drivers build config 00:01:37.611 common/dpaax: not in enabled drivers build config 00:01:37.611 common/iavf: not in enabled drivers build config 00:01:37.611 common/idpf: not in enabled drivers build config 00:01:37.611 common/mvep: not in enabled drivers build config 00:01:37.611 common/octeontx: not in enabled drivers build config 00:01:37.611 bus/auxiliary: not in enabled drivers build config 00:01:37.611 bus/dpaa: not in enabled drivers build config 00:01:37.611 bus/fslmc: not in enabled drivers build config 00:01:37.611 bus/ifpga: not in enabled drivers build config 00:01:37.611 bus/vmbus: not in enabled drivers build config 00:01:37.611 common/cnxk: not in enabled drivers build config 00:01:37.611 common/mlx5: not in enabled drivers build config 00:01:37.611 common/qat: not in enabled drivers build config 00:01:37.611 common/sfc_efx: not in enabled drivers build config 00:01:37.611 mempool/bucket: not in enabled drivers build config 00:01:37.611 mempool/cnxk: not in enabled drivers build config 00:01:37.611 mempool/dpaa: not in enabled drivers build config 00:01:37.611 mempool/dpaa2: not in enabled drivers build config 00:01:37.611 mempool/octeontx: not in enabled drivers build config 00:01:37.611 mempool/stack: not in enabled drivers build config 00:01:37.611 dma/cnxk: not in enabled drivers build config 00:01:37.611 dma/dpaa: not in enabled drivers build config 00:01:37.611 dma/dpaa2: not in enabled drivers build config 00:01:37.611 dma/hisilicon: not in enabled drivers build config 00:01:37.611 dma/idxd: not in enabled drivers build config 00:01:37.611 dma/ioat: not in enabled drivers build config 00:01:37.611 dma/skeleton: not in enabled drivers build config 00:01:37.611 net/af_packet: not in enabled drivers build config 00:01:37.611 net/af_xdp: not in enabled drivers build config 00:01:37.611 net/ark: not in enabled drivers build config 00:01:37.611 net/atlantic: 
not in enabled drivers build config 00:01:37.611 net/avp: not in enabled drivers build config 00:01:37.611 net/axgbe: not in enabled drivers build config 00:01:37.611 net/bnx2x: not in enabled drivers build config 00:01:37.611 net/bnxt: not in enabled drivers build config 00:01:37.611 net/bonding: not in enabled drivers build config 00:01:37.611 net/cnxk: not in enabled drivers build config 00:01:37.611 net/cxgbe: not in enabled drivers build config 00:01:37.611 net/dpaa: not in enabled drivers build config 00:01:37.611 net/dpaa2: not in enabled drivers build config 00:01:37.611 net/e1000: not in enabled drivers build config 00:01:37.611 net/ena: not in enabled drivers build config 00:01:37.611 net/enetc: not in enabled drivers build config 00:01:37.611 net/enetfec: not in enabled drivers build config 00:01:37.611 net/enic: not in enabled drivers build config 00:01:37.611 net/failsafe: not in enabled drivers build config 00:01:37.611 net/fm10k: not in enabled drivers build config 00:01:37.611 net/gve: not in enabled drivers build config 00:01:37.611 net/hinic: not in enabled drivers build config 00:01:37.611 net/hns3: not in enabled drivers build config 00:01:37.611 net/iavf: not in enabled drivers build config 00:01:37.611 net/ice: not in enabled drivers build config 00:01:37.611 net/idpf: not in enabled drivers build config 00:01:37.611 net/igc: not in enabled drivers build config 00:01:37.611 net/ionic: not in enabled drivers build config 00:01:37.611 net/ipn3ke: not in enabled drivers build config 00:01:37.611 net/ixgbe: not in enabled drivers build config 00:01:37.611 net/kni: not in enabled drivers build config 00:01:37.611 net/liquidio: not in enabled drivers build config 00:01:37.611 net/mana: not in enabled drivers build config 00:01:37.611 net/memif: not in enabled drivers build config 00:01:37.611 net/mlx4: not in enabled drivers build config 00:01:37.612 net/mlx5: not in enabled drivers build config 00:01:37.612 net/mvneta: not in enabled drivers build 
config 00:01:37.612 net/mvpp2: not in enabled drivers build config 00:01:37.612 net/netvsc: not in enabled drivers build config 00:01:37.612 net/nfb: not in enabled drivers build config 00:01:37.612 net/nfp: not in enabled drivers build config 00:01:37.612 net/ngbe: not in enabled drivers build config 00:01:37.612 net/null: not in enabled drivers build config 00:01:37.612 net/octeontx: not in enabled drivers build config 00:01:37.612 net/octeon_ep: not in enabled drivers build config 00:01:37.612 net/pcap: not in enabled drivers build config 00:01:37.612 net/pfe: not in enabled drivers build config 00:01:37.612 net/qede: not in enabled drivers build config 00:01:37.612 net/ring: not in enabled drivers build config 00:01:37.612 net/sfc: not in enabled drivers build config 00:01:37.612 net/softnic: not in enabled drivers build config 00:01:37.612 net/tap: not in enabled drivers build config 00:01:37.612 net/thunderx: not in enabled drivers build config 00:01:37.612 net/txgbe: not in enabled drivers build config 00:01:37.612 net/vdev_netvsc: not in enabled drivers build config 00:01:37.612 net/vhost: not in enabled drivers build config 00:01:37.612 net/virtio: not in enabled drivers build config 00:01:37.612 net/vmxnet3: not in enabled drivers build config 00:01:37.612 raw/cnxk_bphy: not in enabled drivers build config 00:01:37.612 raw/cnxk_gpio: not in enabled drivers build config 00:01:37.612 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:37.612 raw/ifpga: not in enabled drivers build config 00:01:37.612 raw/ntb: not in enabled drivers build config 00:01:37.612 raw/skeleton: not in enabled drivers build config 00:01:37.612 crypto/armv8: not in enabled drivers build config 00:01:37.612 crypto/bcmfs: not in enabled drivers build config 00:01:37.612 crypto/caam_jr: not in enabled drivers build config 00:01:37.612 crypto/ccp: not in enabled drivers build config 00:01:37.612 crypto/cnxk: not in enabled drivers build config 00:01:37.612 crypto/dpaa_sec: not in 
enabled drivers build config 00:01:37.612 crypto/dpaa2_sec: not in enabled drivers build config 00:01:37.612 crypto/ipsec_mb: not in enabled drivers build config 00:01:37.612 crypto/mlx5: not in enabled drivers build config 00:01:37.612 crypto/mvsam: not in enabled drivers build config 00:01:37.612 crypto/nitrox: not in enabled drivers build config 00:01:37.612 crypto/null: not in enabled drivers build config 00:01:37.612 crypto/octeontx: not in enabled drivers build config 00:01:37.612 crypto/openssl: not in enabled drivers build config 00:01:37.612 crypto/scheduler: not in enabled drivers build config 00:01:37.612 crypto/uadk: not in enabled drivers build config 00:01:37.612 crypto/virtio: not in enabled drivers build config 00:01:37.612 compress/isal: not in enabled drivers build config 00:01:37.612 compress/mlx5: not in enabled drivers build config 00:01:37.612 compress/octeontx: not in enabled drivers build config 00:01:37.612 compress/zlib: not in enabled drivers build config 00:01:37.612 regex/mlx5: not in enabled drivers build config 00:01:37.612 regex/cn9k: not in enabled drivers build config 00:01:37.612 vdpa/ifc: not in enabled drivers build config 00:01:37.612 vdpa/mlx5: not in enabled drivers build config 00:01:37.612 vdpa/sfc: not in enabled drivers build config 00:01:37.612 event/cnxk: not in enabled drivers build config 00:01:37.612 event/dlb2: not in enabled drivers build config 00:01:37.612 event/dpaa: not in enabled drivers build config 00:01:37.612 event/dpaa2: not in enabled drivers build config 00:01:37.612 event/dsw: not in enabled drivers build config 00:01:37.612 event/opdl: not in enabled drivers build config 00:01:37.612 event/skeleton: not in enabled drivers build config 00:01:37.612 event/sw: not in enabled drivers build config 00:01:37.612 event/octeontx: not in enabled drivers build config 00:01:37.612 baseband/acc: not in enabled drivers build config 00:01:37.612 baseband/fpga_5gnr_fec: not in enabled drivers build config 
00:01:37.612 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:37.612 baseband/la12xx: not in enabled drivers build config 00:01:37.612 baseband/null: not in enabled drivers build config 00:01:37.612 baseband/turbo_sw: not in enabled drivers build config 00:01:37.612 gpu/cuda: not in enabled drivers build config 00:01:37.612 00:01:37.612 00:01:37.612 Build targets in project: 316 00:01:37.612 00:01:37.612 DPDK 22.11.4 00:01:37.612 00:01:37.612 User defined options 00:01:37.612 libdir : lib 00:01:37.612 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:37.612 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:37.612 c_link_args : 00:01:37.612 enable_docs : false 00:01:37.612 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:37.612 enable_kmods : false 00:01:37.612 machine : native 00:01:37.612 tests : false 00:01:37.612 00:01:37.612 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:37.612 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
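Two details of the configure step above are easy to miss. First, the trailing comma in `-Denable_drivers=bus,bus/pci,...,net/i40e/base,` comes from the `printf %s,` join of the `DPDK_DRIVERS` array seen earlier in the trace, which appends a comma after every element including the last. Second, Meson's closing WARNING refers to the legacy `meson [options]` invocation; the non-deprecated spelling is `meson setup [options]`. The join can be reproduced standalone (array contents copied from the trace; this is an illustration, not the autobuild script itself):

```shell
#!/usr/bin/env bash
# The driver list as set in common/autobuild_common.sh, per the trace above.
DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")

# printf repeats its format once per argument, so '%s,' emits a comma
# after every element -- hence the trailing comma in -Denable_drivers=.
joined=$(printf '%s,' "${DPDK_DRIVERS[@]}")
echo "$joined"   # bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
```

Meson tolerates the trailing comma in the comma-separated `enable_drivers` list, so the build proceeds unchanged.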
00:01:37.612 00:07:01 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:37.612 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:37.880 [1/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:37.880 [2/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:37.880 [3/745] Generating lib/rte_kvargs_def with a custom command 00:01:37.880 [4/745] Generating lib/rte_telemetry_def with a custom command 00:01:37.880 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:37.880 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:37.880 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:37.880 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:37.880 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:37.880 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:37.880 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:37.880 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:37.880 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:37.880 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:37.880 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:37.880 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:37.880 [17/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:37.880 [18/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:37.880 [19/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:37.880 [20/745] Linking static target lib/librte_kvargs.a 00:01:37.880 
[21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:37.880 [22/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:37.880 [23/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:37.880 [24/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:37.880 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:37.880 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:37.880 [27/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:37.880 [28/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:38.144 [29/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:38.144 [30/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:38.144 [31/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:38.144 [32/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:38.144 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:38.144 [34/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:38.144 [35/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:38.144 [36/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:38.144 [37/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:38.144 [38/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:38.144 [39/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:38.144 [40/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:38.144 [41/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:38.144 [42/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:38.144 [43/745] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:38.144 [44/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:38.144 [45/745] Generating lib/rte_eal_mingw with a custom command 00:01:38.144 [46/745] Generating lib/rte_eal_def with a custom command 00:01:38.145 [47/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:38.145 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:38.145 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:38.145 [50/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:38.145 [51/745] Generating lib/rte_ring_def with a custom command 00:01:38.145 [52/745] Generating lib/rte_ring_mingw with a custom command 00:01:38.145 [53/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:38.145 [54/745] Generating lib/rte_rcu_mingw with a custom command 00:01:38.145 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:38.145 [56/745] Generating lib/rte_mempool_mingw with a custom command 00:01:38.145 [57/745] Generating lib/rte_rcu_def with a custom command 00:01:38.145 [58/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:38.145 [59/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:38.145 [60/745] Generating lib/rte_mempool_def with a custom command 00:01:38.145 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:38.145 [62/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:38.145 [63/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:38.145 [64/745] Generating lib/rte_mbuf_def with a custom command 00:01:38.145 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:38.145 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:38.145 [67/745] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:38.145 [68/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:38.145 [69/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:38.145 [70/745] Generating lib/rte_net_mingw with a custom command 00:01:38.145 [71/745] Generating lib/rte_meter_def with a custom command 00:01:38.145 [72/745] Generating lib/rte_net_def with a custom command 00:01:38.145 [73/745] Generating lib/rte_meter_mingw with a custom command 00:01:38.145 [74/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:38.145 [75/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:38.145 [76/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:38.415 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:38.415 [78/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:38.415 [79/745] Linking static target lib/librte_ring.a 00:01:38.415 [80/745] Generating lib/rte_ethdev_def with a custom command 00:01:38.415 [81/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.416 [82/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:38.416 [83/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:38.416 [84/745] Linking target lib/librte_kvargs.so.23.0 00:01:38.416 [85/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:38.416 [86/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:38.416 [87/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:38.416 [88/745] Linking static target lib/librte_meter.a 00:01:38.416 [89/745] Generating lib/rte_pci_def with a custom command 00:01:38.416 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:38.684 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 
00:01:38.684 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:38.684 [93/745] Linking static target lib/librte_pci.a 00:01:38.684 [94/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:38.684 [95/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:38.684 [96/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:38.684 [97/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:38.948 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:38.948 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.948 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:38.948 [101/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.948 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:38.948 [103/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:38.948 [104/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:38.948 [105/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:38.948 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:38.948 [107/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:38.948 [108/745] Linking static target lib/librte_telemetry.a 00:01:38.948 [109/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:38.948 [110/745] Generating lib/rte_cmdline_def with a custom command 00:01:38.948 [111/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:38.948 [112/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:38.948 [113/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.948 [114/745] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:38.948 [115/745] Generating lib/rte_metrics_mingw with a custom command 00:01:38.948 [116/745] Generating lib/rte_metrics_def with a custom command 00:01:38.948 [117/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:38.948 [118/745] Generating lib/rte_hash_def with a custom command 00:01:38.948 [119/745] Generating lib/rte_hash_mingw with a custom command 00:01:38.948 [120/745] Generating lib/rte_timer_mingw with a custom command 00:01:38.948 [121/745] Generating lib/rte_timer_def with a custom command 00:01:39.213 [122/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:39.213 [123/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:39.213 [124/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:39.213 [125/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:39.213 [126/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:39.213 [127/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:39.214 [128/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:39.477 [129/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:39.477 [130/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:39.477 [131/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:39.477 [132/745] Generating lib/rte_acl_def with a custom command 00:01:39.477 [133/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:39.477 [134/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:39.477 [135/745] Generating lib/rte_acl_mingw with a custom command 00:01:39.477 [136/745] Generating lib/rte_bbdev_def with a custom command 00:01:39.477 [137/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:39.477 [138/745] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:39.477 [139/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:39.477 [140/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:39.477 [141/745] Generating lib/rte_bitratestats_def with a custom command 00:01:39.477 [142/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:39.477 [143/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:39.477 [144/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:39.477 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:39.477 [146/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:39.477 [147/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.477 [148/745] Generating lib/rte_bpf_def with a custom command 00:01:39.740 [149/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:39.740 [150/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:39.740 [151/745] Generating lib/rte_bpf_mingw with a custom command 00:01:39.740 [152/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:39.740 [153/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:39.741 [154/745] Linking target lib/librte_telemetry.so.23.0 00:01:39.741 [155/745] Generating lib/rte_cfgfile_def with a custom command 00:01:39.741 [156/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:39.741 [157/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:39.741 [158/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:39.741 [159/745] Generating lib/rte_compressdev_def with a custom command 00:01:39.741 [160/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:39.741 [161/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 
00:01:39.741 [162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:39.741 [163/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:39.741 [164/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:39.741 [165/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:39.741 [166/745] Linking static target lib/librte_rcu.a 00:01:40.005 [167/745] Generating lib/rte_cryptodev_def with a custom command 00:01:40.005 [168/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:40.005 [169/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:40.005 [170/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:40.005 [171/745] Generating lib/rte_distributor_def with a custom command 00:01:40.005 [172/745] Linking static target lib/librte_timer.a 00:01:40.005 [173/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:40.005 [174/745] Linking static target lib/librte_net.a 00:01:40.005 [175/745] Linking static target lib/librte_cmdline.a 00:01:40.005 [176/745] Generating lib/rte_distributor_mingw with a custom command 00:01:40.005 [177/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:40.005 [178/745] Generating lib/rte_efd_def with a custom command 00:01:40.005 [179/745] Generating lib/rte_efd_mingw with a custom command 00:01:40.269 [180/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:40.270 [181/745] Linking static target lib/librte_cfgfile.a 00:01:40.270 [182/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:40.270 [183/745] Linking static target lib/librte_mempool.a 00:01:40.270 [184/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:40.270 [185/745] Linking static target lib/librte_metrics.a 00:01:40.270 [186/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.537 
[187/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.537 [188/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:40.537 [189/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:40.537 [190/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:40.537 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.537 [192/745] Linking static target lib/librte_eal.a 00:01:40.537 [193/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:40.537 [194/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:40.537 [195/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:40.817 [196/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:40.817 [197/745] Generating lib/rte_eventdev_def with a custom command 00:01:40.817 [198/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.817 [199/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:40.817 [200/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:40.817 [201/745] Linking static target lib/librte_bitratestats.a 00:01:40.817 [202/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:40.817 [203/745] Generating lib/rte_gpudev_def with a custom command 00:01:40.817 [204/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:40.817 [205/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:40.817 [206/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.817 [207/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:40.817 [208/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:40.817 [209/745] Generating lib/rte_gro_def with a custom command 00:01:41.091 [210/745] Generating 
lib/rte_gro_mingw with a custom command 00:01:41.091 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:41.091 [212/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:41.091 [213/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.091 [214/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:41.091 [215/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:41.091 [216/745] Generating lib/rte_gso_def with a custom command 00:01:41.091 [217/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:41.091 [218/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:41.091 [219/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:41.091 [220/745] Generating lib/rte_gso_mingw with a custom command 00:01:41.394 [221/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:41.394 [222/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:41.394 [223/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.394 [224/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:41.394 [225/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:41.394 [226/745] Linking static target lib/librte_bbdev.a 00:01:41.394 [227/745] Generating lib/rte_ip_frag_def with a custom command 00:01:41.394 [228/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:41.394 [229/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:41.394 [230/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.394 [231/745] Generating lib/rte_jobstats_def with a custom command 00:01:41.394 
[232/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:41.394 [233/745] Generating lib/rte_latencystats_def with a custom command 00:01:41.394 [234/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:41.394 [235/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:41.664 [236/745] Linking static target lib/librte_compressdev.a 00:01:41.664 [237/745] Generating lib/rte_lpm_def with a custom command 00:01:41.665 [238/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:41.665 [239/745] Linking static target lib/librte_jobstats.a 00:01:41.665 [240/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:41.665 [241/745] Generating lib/rte_lpm_mingw with a custom command 00:01:41.665 [242/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:41.665 [243/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:41.949 [244/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:41.949 [245/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:41.949 [246/745] Generating lib/rte_member_def with a custom command 00:01:41.949 [247/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:41.949 [248/745] Linking static target lib/librte_distributor.a 00:01:41.949 [249/745] Generating lib/rte_member_mingw with a custom command 00:01:42.214 [250/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.214 [251/745] Generating lib/rte_pcapng_def with a custom command 00:01:42.214 [252/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:42.214 [253/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:42.214 [254/745] Linking static target lib/librte_bpf.a 00:01:42.214 [255/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:42.214 [256/745] Generating lib/rte_pcapng_mingw 
with a custom command 00:01:42.214 [257/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:42.214 [258/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.484 [259/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:42.484 [260/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:42.484 [261/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:42.484 [262/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:42.484 [263/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:42.484 [264/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:42.484 [265/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:42.484 [266/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:42.484 [267/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.484 [268/745] Generating lib/rte_power_def with a custom command 00:01:42.484 [269/745] Linking static target lib/librte_gpudev.a 00:01:42.484 [270/745] Generating lib/rte_power_mingw with a custom command 00:01:42.484 [271/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:42.484 [272/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:42.484 [273/745] Generating lib/rte_rawdev_def with a custom command 00:01:42.484 [274/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:42.484 [275/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:42.484 [276/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:42.484 [277/745] Linking static target lib/librte_gro.a 00:01:42.484 [278/745] Generating lib/rte_regexdev_def with a custom command 00:01:42.484 [279/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:42.484 [280/745] Generating lib/rte_dmadev_def with a custom command 
00:01:42.484 [281/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:42.749 [282/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:42.749 [283/745] Generating lib/rte_rib_def with a custom command 00:01:42.749 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:42.749 [285/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.749 [286/745] Generating lib/rte_reorder_def with a custom command 00:01:42.749 [287/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:42.749 [288/745] Generating lib/rte_reorder_mingw with a custom command 00:01:42.749 [289/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:43.014 [290/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:43.014 [291/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.014 [292/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:43.014 [293/745] Generating lib/rte_sched_def with a custom command 00:01:43.014 [294/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:43.014 [295/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:43.014 [296/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:43.014 [297/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:43.014 [298/745] Generating lib/rte_sched_mingw with a custom command 00:01:43.014 [299/745] Linking static target lib/librte_latencystats.a 00:01:43.014 [300/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:43.014 [301/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:43.014 [302/745] Generating lib/rte_security_def with a custom command 00:01:43.014 [303/745] Generating lib/rte_security_mingw with a custom command 00:01:43.014 [304/745] Compiling C object 
lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:43.014 [305/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:43.014 [306/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:43.014 [307/745] Generating lib/rte_stack_mingw with a custom command 00:01:43.014 [308/745] Generating lib/rte_stack_def with a custom command 00:01:43.014 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:43.282 [310/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.282 [311/745] Linking static target lib/librte_rawdev.a 00:01:43.282 [312/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:43.282 [313/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:43.282 [314/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:43.282 [315/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:43.282 [316/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:43.282 [317/745] Linking static target lib/librte_stack.a 00:01:43.282 [318/745] Generating lib/rte_vhost_def with a custom command 00:01:43.282 [319/745] Generating lib/rte_vhost_mingw with a custom command 00:01:43.282 [320/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:43.282 [321/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:43.282 [322/745] Linking static target lib/librte_dmadev.a 00:01:43.282 [323/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:43.550 [324/745] Linking static target lib/librte_ip_frag.a 00:01:43.550 [325/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.550 [326/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:43.550 [327/745] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:43.550 [328/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.550 [329/745] Generating lib/rte_ipsec_def with a custom command 00:01:43.550 [330/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:43.550 [331/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:43.550 [332/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:43.816 [333/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:43.817 [334/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:43.817 [335/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.817 [336/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.817 [337/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.817 [338/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:44.078 [339/745] Generating lib/rte_fib_def with a custom command 00:01:44.078 [340/745] Generating lib/rte_fib_mingw with a custom command 00:01:44.078 [341/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:44.078 [342/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:44.078 [343/745] Linking static target lib/librte_gso.a 00:01:44.078 [344/745] Linking static target lib/librte_regexdev.a 00:01:44.078 [345/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:44.341 [346/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:44.341 [347/745] Linking static target lib/librte_efd.a 00:01:44.341 [348/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:44.341 [349/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.341 
[350/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.341 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:44.341 [352/745] Linking static target lib/librte_pcapng.a 00:01:44.341 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:44.603 [354/745] Linking static target lib/librte_lpm.a 00:01:44.603 [355/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:44.603 [356/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:44.603 [357/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:44.603 [358/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:44.603 [359/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:44.603 [360/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:44.603 [361/745] Linking static target lib/librte_reorder.a 00:01:44.603 [362/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.878 [363/745] Generating lib/rte_port_def with a custom command 00:01:44.878 [364/745] Generating lib/rte_port_mingw with a custom command 00:01:44.878 [365/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:44.878 [366/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:44.878 [367/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:44.878 [368/745] Linking static target lib/acl/libavx2_tmp.a 00:01:44.878 [369/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:44.878 [370/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:44.878 [371/745] Generating lib/rte_pdump_mingw with a custom command 00:01:44.878 [372/745] Generating lib/rte_pdump_def with a custom command 00:01:44.878 [373/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.878 [374/745] Compiling C object 
lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:44.878 [375/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:44.878 [376/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:44.878 [377/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:44.878 [378/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:45.145 [379/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.145 [380/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:45.145 [381/745] Linking static target lib/librte_security.a 00:01:45.145 [382/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.145 [383/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:45.145 [384/745] Linking static target lib/librte_hash.a 00:01:45.145 [385/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:45.145 [386/745] Linking static target lib/librte_power.a 00:01:45.145 [387/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:45.145 [388/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:45.408 [389/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:45.408 [390/745] Linking static target lib/librte_rib.a 00:01:45.408 [391/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.408 [392/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:45.408 [393/745] Linking static target lib/acl/libavx512_tmp.a 00:01:45.408 [394/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:45.408 [395/745] Linking static target lib/librte_acl.a 00:01:45.669 [396/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:45.669 [397/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:45.669 [398/745] Compiling C 
object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:45.669 [399/745] Linking static target lib/librte_ethdev.a 00:01:45.669 [400/745] Generating lib/rte_table_def with a custom command 00:01:45.669 [401/745] Generating lib/rte_table_mingw with a custom command 00:01:45.932 [402/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.932 [403/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.932 [404/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.201 [405/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:46.201 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:46.201 [407/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:46.201 [408/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:46.201 [409/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:46.201 [410/745] Generating lib/rte_pipeline_def with a custom command 00:01:46.201 [411/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:46.201 [412/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:46.201 [413/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:46.201 [414/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:46.201 [415/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:46.201 [416/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.465 [417/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:46.465 [418/745] Generating lib/rte_graph_def with a custom command 00:01:46.465 [419/745] Generating lib/rte_graph_mingw with a custom command 00:01:46.465 [420/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:46.465 [421/745] 
Linking static target lib/librte_mbuf.a 00:01:46.465 [422/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:46.465 [423/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:46.465 [424/745] Linking static target lib/librte_fib.a 00:01:46.465 [425/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:46.465 [426/745] Linking static target lib/librte_member.a 00:01:46.730 [427/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:46.730 [428/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:46.730 [429/745] Linking static target lib/librte_eventdev.a 00:01:46.730 [430/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:46.730 [431/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:46.730 [432/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:46.730 [433/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:46.730 [434/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:46.730 [435/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:46.730 [436/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:46.730 [437/745] Generating lib/rte_node_mingw with a custom command 00:01:46.730 [438/745] Generating lib/rte_node_def with a custom command 00:01:46.990 [439/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.990 [440/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:46.990 [441/745] Linking static target lib/librte_sched.a 00:01:46.990 [442/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:46.990 [443/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.990 [444/745] Compiling C object 
lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:46.990 [445/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:46.990 [446/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:47.253 [447/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:47.253 [448/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:47.253 [449/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:47.253 [450/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.253 [451/745] Linking static target lib/librte_cryptodev.a 00:01:47.253 [452/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:47.254 [453/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:47.254 [454/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:47.254 [455/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:47.254 [456/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:47.254 [457/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:47.254 [458/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.254 [459/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:47.254 [460/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:47.254 [461/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:47.516 [462/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:47.516 [463/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:47.516 [464/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:47.516 [465/745] Linking static target lib/librte_pdump.a 00:01:47.516 [466/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 
00:01:47.516 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:47.516 [468/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:47.516 [469/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:47.516 [470/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:47.516 [471/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:47.516 [472/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:47.784 [473/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:47.784 [474/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:47.784 [475/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:47.784 [476/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.784 [477/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:47.784 [478/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:47.784 [479/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:48.055 [480/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:48.055 [481/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:48.055 [482/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:48.055 [483/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.055 [484/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:48.055 [485/745] Linking static target lib/librte_ipsec.a 00:01:48.055 [486/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:48.055 [487/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:48.055 [488/745] Linking static target lib/librte_table.a 00:01:48.055 [489/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 
00:01:48.055 [490/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:48.324 [491/745] Linking static target drivers/librte_bus_vdev.a 00:01:48.324 [492/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.324 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:48.324 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:48.592 [495/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:48.592 [496/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:48.592 [497/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:48.592 [498/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.592 [499/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:48.592 [500/745] Linking static target lib/librte_graph.a 00:01:48.592 [501/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.592 [502/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:48.592 [503/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:48.592 [504/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:48.855 [505/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:48.855 [506/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:48.855 [507/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:48.855 [508/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:48.855 [509/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.855 [510/745] Linking static target drivers/librte_bus_pci.a 00:01:48.855 [511/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:48.855 
[512/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.122 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:49.387 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.387 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:49.387 [516/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:49.387 [517/745] Linking static target lib/librte_port.a 00:01:49.387 [518/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:49.649 [519/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.649 [520/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:49.649 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:49.919 [522/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.919 [523/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:49.919 [524/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:49.919 [525/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:49.919 [526/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:50.185 [527/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:50.185 [528/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.185 [529/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:50.185 [530/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:50.185 [531/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:50.186 [532/745] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.186 [533/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.186 [534/745] Linking static target drivers/librte_mempool_ring.a 00:01:50.452 [535/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:50.452 [536/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:50.452 [537/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.452 [538/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:50.718 [539/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:50.718 [540/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:50.718 [541/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:50.985 [542/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.985 [543/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:51.250 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:51.250 [545/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:51.250 [546/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:51.250 [547/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:51.250 [548/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:51.250 [549/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:51.250 [550/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:51.250 [551/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:51.829 [552/745] 
Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:51.829 [553/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:51.829 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:51.829 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:51.829 [556/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:52.094 [557/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:52.094 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:52.366 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:52.366 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:52.366 [561/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:52.366 [562/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:52.628 [563/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:52.628 [564/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:52.628 [565/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:52.628 [566/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:52.628 [567/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:52.628 [568/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:52.628 [569/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:52.897 [570/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:52.897 [571/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:53.160 [572/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:53.160 [573/745] 
Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:53.160 [574/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:53.160 [575/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:53.426 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:53.426 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:53.426 [578/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:53.426 [579/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:53.426 [580/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:53.426 [581/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:53.426 [582/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:53.692 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:53.692 [584/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:53.692 [585/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:53.956 [586/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.218 [587/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:54.218 [588/745] Linking target lib/librte_eal.so.23.0 00:01:54.218 [589/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:54.218 [590/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:54.487 [591/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.487 [592/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:54.487 [593/745] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:54.487 [594/745] Linking target lib/librte_meter.so.23.0 00:01:54.487 [595/745] Linking target lib/librte_ring.so.23.0 00:01:54.487 [596/745] Linking target lib/librte_pci.so.23.0 00:01:54.487 [597/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:54.487 [598/745] Linking target lib/librte_timer.so.23.0 00:01:54.487 [599/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:54.487 [600/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:54.487 [601/745] Linking target lib/librte_acl.so.23.0 00:01:54.487 [602/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:54.487 [603/745] Linking target lib/librte_cfgfile.so.23.0 00:01:54.487 [604/745] Linking target lib/librte_rawdev.so.23.0 00:01:54.487 [605/745] Linking target lib/librte_jobstats.so.23.0 00:01:54.757 [606/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:54.758 [607/745] Linking target lib/librte_dmadev.so.23.0 00:01:54.758 [608/745] Linking target lib/librte_stack.so.23.0 00:01:54.758 [609/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:54.758 [610/745] Linking target lib/librte_graph.so.23.0 00:01:54.758 [611/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:54.758 [612/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:54.758 [613/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:54.758 [614/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:54.758 [615/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:54.758 [616/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:54.758 [617/745] Linking target lib/librte_rcu.so.23.0 00:01:54.758 [618/745] Linking target lib/librte_mempool.so.23.0 
00:01:54.758 [619/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:54.758 [620/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:54.758 [621/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:54.758 [622/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:55.018 [623/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:55.018 [624/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:55.018 [625/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:55.018 [626/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:55.018 [627/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:55.018 [628/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:55.018 [629/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:55.018 [630/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:55.018 [631/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:55.018 [632/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:55.018 [633/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:55.018 [634/745] Linking target lib/librte_rib.so.23.0 00:01:55.018 [635/745] Linking target lib/librte_mbuf.so.23.0 00:01:55.018 [636/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:55.018 [637/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:55.018 [638/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:55.277 [639/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:55.277 [640/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 
00:01:55.277 [641/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:55.277 [642/745] Linking target lib/librte_distributor.so.23.0 00:01:55.277 [643/745] Linking target lib/librte_cryptodev.so.23.0 00:01:55.277 [644/745] Linking target lib/librte_gpudev.so.23.0 00:01:55.277 [645/745] Linking target lib/librte_reorder.so.23.0 00:01:55.277 [646/745] Linking target lib/librte_compressdev.so.23.0 00:01:55.277 [647/745] Linking target lib/librte_bbdev.so.23.0 00:01:55.277 [648/745] Linking target lib/librte_net.so.23.0 00:01:55.277 [649/745] Linking target lib/librte_fib.so.23.0 00:01:55.277 [650/745] Linking target lib/librte_regexdev.so.23.0 00:01:55.277 [651/745] Linking target lib/librte_sched.so.23.0 00:01:55.277 [652/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:55.277 [653/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:55.277 [654/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:55.277 [655/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:55.277 [656/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:55.277 [657/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:55.538 [658/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:55.538 [659/745] Linking target lib/librte_security.so.23.0 00:01:55.538 [660/745] Linking target lib/librte_hash.so.23.0 00:01:55.538 [661/745] Linking target lib/librte_cmdline.so.23.0 00:01:55.538 [662/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:55.538 [663/745] Linking target lib/librte_ethdev.so.23.0 00:01:55.538 [664/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:55.538 [665/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 
00:01:55.538 [666/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:55.538 [667/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:55.538 [668/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:55.538 [669/745] Linking target lib/librte_lpm.so.23.0 00:01:55.538 [670/745] Linking target lib/librte_member.so.23.0 00:01:55.538 [671/745] Linking target lib/librte_efd.so.23.0 00:01:55.538 [672/745] Linking target lib/librte_gro.so.23.0 00:01:55.538 [673/745] Linking target lib/librte_ipsec.so.23.0 00:01:55.538 [674/745] Linking target lib/librte_bpf.so.23.0 00:01:55.538 [675/745] Linking target lib/librte_gso.so.23.0 00:01:55.538 [676/745] Linking target lib/librte_ip_frag.so.23.0 00:01:55.538 [677/745] Linking target lib/librte_pcapng.so.23.0 00:01:55.799 [678/745] Linking target lib/librte_metrics.so.23.0 00:01:55.799 [679/745] Linking target lib/librte_power.so.23.0 00:01:55.799 [680/745] Linking target lib/librte_eventdev.so.23.0 00:01:55.799 [681/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:55.799 [682/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:55.799 [683/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:55.799 [684/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:55.799 [685/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:55.799 [686/745] Linking target lib/librte_pdump.so.23.0 00:01:55.799 [687/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:55.799 [688/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:55.799 [689/745] Linking target lib/librte_port.so.23.0 00:01:55.799 [690/745] Linking target lib/librte_bitratestats.so.23.0 00:01:55.799 [691/745] Linking target 
lib/librte_latencystats.so.23.0 00:01:56.058 [692/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:56.058 [693/745] Linking target lib/librte_table.so.23.0 00:01:56.058 [694/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:56.318 [695/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:56.318 [696/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:56.318 [697/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:56.577 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:56.577 [699/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:56.836 [700/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:56.836 [701/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:56.836 [702/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:56.836 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:57.096 [704/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:57.355 [705/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:57.355 [706/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:57.355 [707/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:57.355 [708/745] Linking static target drivers/librte_net_i40e.a 00:01:57.355 [709/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:57.614 [710/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:57.873 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.873 [712/745] Linking target 
drivers/librte_net_i40e.so.23.0 00:01:58.441 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:58.441 [714/745] Linking static target lib/librte_node.a 00:01:58.700 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.700 [716/745] Linking target lib/librte_node.so.23.0 00:01:58.960 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:59.532 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:00.100 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:08.266 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:40.346 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:40.346 [722/745] Linking static target lib/librte_vhost.a 00:02:40.346 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.346 [724/745] Linking target lib/librte_vhost.so.23.0 00:02:50.329 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:50.329 [726/745] Linking static target lib/librte_pipeline.a 00:02:50.329 [727/745] Linking target app/dpdk-test-fib 00:02:50.329 [728/745] Linking target app/dpdk-test-regex 00:02:50.329 [729/745] Linking target app/dpdk-test-sad 00:02:50.329 [730/745] Linking target app/dpdk-test-cmdline 00:02:50.329 [731/745] Linking target app/dpdk-test-gpudev 00:02:50.329 [732/745] Linking target app/dpdk-dumpcap 00:02:50.329 [733/745] Linking target app/dpdk-test-acl 00:02:50.329 [734/745] Linking target app/dpdk-pdump 00:02:50.329 [735/745] Linking target app/dpdk-test-security-perf 00:02:50.329 [736/745] Linking target app/dpdk-test-flow-perf 00:02:50.329 [737/745] Linking target app/dpdk-test-pipeline 00:02:50.329 [738/745] Linking target app/dpdk-proc-info 00:02:50.329 [739/745] Linking target app/dpdk-test-crypto-perf 00:02:50.329 [740/745] Linking target 
app/dpdk-test-eventdev 00:02:50.329 [741/745] Linking target app/dpdk-test-bbdev 00:02:50.329 [742/745] Linking target app/dpdk-test-compress-perf 00:02:50.329 [743/745] Linking target app/dpdk-testpmd 00:02:52.231 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.231 [745/745] Linking target lib/librte_pipeline.so.23.0 00:02:52.231 00:08:15 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:02:52.231 00:08:15 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:52.231 00:08:15 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:52.231 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:52.231 [0/1] Installing files. 00:02:52.495 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:52.495 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.495 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.496 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.496 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.496 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.496 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.496 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.497 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:52.497 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:52.497 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:52.497 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.498 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.498 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.498 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.499 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:52.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.761 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:52.761 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:52.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:52.762 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:52.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:52.762 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_rcu.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_hash.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.762 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_distributor.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_lpm.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_sched.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_pipeline.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:52.763 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:53.339 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:53.339 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:53.339 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:53.339 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:53.339 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:53.339 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:53.339 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:53.339 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:53.339 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:53.339 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:53.339 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:53.339 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:53.339 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:53.339 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:53.339 Installing app/dpdk-test-acl to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:53.339 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:53.339 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:53.339 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:53.339 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:53.339 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:53.339 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:53.339 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:53.339 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:53.339 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:53.339 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:53.339 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:53.339 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:53.339 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.340 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:02:53.341 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:02:53.341 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23
00:02:53.341 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so
00:02:53.341 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23
00:02:53.341 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so
00:02:53.341 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23
00:02:53.341 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:53.341 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:53.341 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:53.341 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:53.341 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:53.341 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:53.341 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:53.341 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:53.341 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:53.341 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:53.341 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:53.341 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:53.341 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:53.341 Installing symlink pointing to librte_ethdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:53.341 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:53.341 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:53.341 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:53.341 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:53.341 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:53.341 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:53.341 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:53.341 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:53.341 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:53.341 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:53.341 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:53.341 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:53.341 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:53.341 Installing symlink pointing to 
librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:53.341 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:53.341 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:53.341 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:53.341 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:53.341 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:53.341 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:53.341 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:53.341 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:53.341 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:53.341 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:53.341 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:53.341 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:53.341 Installing symlink pointing to librte_distributor.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:53.341 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:53.341 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:53.341 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:53.341 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:53.341 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:53.341 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:53.341 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:53.341 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:53.341 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:53.341 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:53.341 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:53.341 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:53.341 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:53.341 Installing 
symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:53.341 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:53.341 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:53.341 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:53.341 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:53.341 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:53.341 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:53.341 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:53.341 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:53.341 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:53.341 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:53.341 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:53.341 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:53.341 Installing symlink pointing to librte_regexdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:53.341 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:53.341 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:53.341 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:53.341 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:53.341 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:53.341 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:53.341 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:53.341 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:53.341 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:53.341 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:53.341 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:53.341 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:53.341 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:53.341 
Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:53.341 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:53.341 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:53.341 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:53.341 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:53.341 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:53.341 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:53.341 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:53.341 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:53.341 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:53.341 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:53.341 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:53.341 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:53.341 Installing symlink pointing to librte_pipeline.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:53.341 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:53.341 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:53.341 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:53.341 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:53.341 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:53.341 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:53.342 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:53.342 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:53.342 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:53.342 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:53.342 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:53.342 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:53.342 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:53.342 
'./librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:53.342 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:53.342 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:53.342 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:53.342 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:53.342 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:53.342 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:53.342 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:53.342 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:53.342 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:53.342 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:53.342 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:53.342 00:08:16 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:53.342 00:08:16 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:53.342 00:02:53.342 real 1m23.315s 00:02:53.342 user 14m26.442s 00:02:53.342 sys 1m53.147s 00:02:53.342 00:08:16 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:53.342 00:08:16 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:53.342 ************************************ 00:02:53.342 END TEST build_native_dpdk 00:02:53.342 ************************************ 00:02:53.342 00:08:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:53.342 00:08:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:53.342 00:08:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 
00:02:53.342 00:08:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:53.342 00:08:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:53.342 00:08:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:53.342 00:08:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:53.342 00:08:16 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:53.342 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:53.601 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:53.601 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:53.601 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:53.861 Using 'verbs' RDMA provider 00:03:04.849 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:14.835 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:14.835 Creating mk/config.mk...done. 00:03:14.835 Creating mk/cc.flags.mk...done. 00:03:14.835 Type 'make' to build. 00:03:14.835 00:08:37 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:03:14.835 00:08:37 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:14.835 00:08:37 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:14.835 00:08:37 -- common/autotest_common.sh@10 -- $ set +x 00:03:14.835 ************************************ 00:03:14.835 START TEST make 00:03:14.835 ************************************ 00:03:14.835 00:08:37 make -- common/autotest_common.sh@1129 -- $ make -j48 00:03:14.835 make[1]: Nothing to be done for 'all'. 
00:03:16.232 The Meson build system 00:03:16.232 Version: 1.5.0 00:03:16.232 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:16.232 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:16.232 Build type: native build 00:03:16.232 Project name: libvfio-user 00:03:16.232 Project version: 0.0.1 00:03:16.232 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:16.232 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:16.232 Host machine cpu family: x86_64 00:03:16.232 Host machine cpu: x86_64 00:03:16.232 Run-time dependency threads found: YES 00:03:16.232 Library dl found: YES 00:03:16.232 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:16.232 Run-time dependency json-c found: YES 0.17 00:03:16.232 Run-time dependency cmocka found: YES 1.1.7 00:03:16.232 Program pytest-3 found: NO 00:03:16.232 Program flake8 found: NO 00:03:16.232 Program misspell-fixer found: NO 00:03:16.232 Program restructuredtext-lint found: NO 00:03:16.232 Program valgrind found: YES (/usr/bin/valgrind) 00:03:16.232 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:16.232 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:16.232 Compiler for C supports arguments -Wwrite-strings: YES 00:03:16.232 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:16.232 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:16.232 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:16.232 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:16.232 Build targets in project: 8 00:03:16.232 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:16.232 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:16.232 00:03:16.232 libvfio-user 0.0.1 00:03:16.232 00:03:16.232 User defined options 00:03:16.232 buildtype : debug 00:03:16.232 default_library: shared 00:03:16.232 libdir : /usr/local/lib 00:03:16.232 00:03:16.232 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:17.201 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:17.201 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:17.201 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:17.201 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:17.201 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:17.201 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:17.201 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:17.201 [7/37] Compiling C object samples/null.p/null.c.o 00:03:17.201 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:17.201 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:17.201 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:17.201 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:17.462 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:17.462 [13/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:17.462 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:17.462 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:17.462 [16/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:17.462 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:17.462 [18/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:17.462 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:17.462 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:17.462 [21/37] Compiling C object samples/server.p/server.c.o 00:03:17.462 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:17.462 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:17.462 [24/37] Compiling C object samples/client.p/client.c.o 00:03:17.462 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:17.462 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:17.462 [27/37] Linking target samples/client 00:03:17.462 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:17.462 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:17.726 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:03:17.726 [31/37] Linking target test/unit_tests 00:03:17.726 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:17.989 [33/37] Linking target samples/server 00:03:17.989 [34/37] Linking target samples/gpio-pci-idio-16 00:03:17.989 [35/37] Linking target samples/shadow_ioeventfd_server 00:03:17.989 [36/37] Linking target samples/lspci 00:03:17.989 [37/37] Linking target samples/null 00:03:17.989 INFO: autodetecting backend as ninja 00:03:17.989 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:17.990 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:18.929 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:18.929 ninja: no work to do. 
00:03:57.643 CC lib/log/log.o 00:03:57.643 CC lib/log/log_flags.o 00:03:57.643 CC lib/log/log_deprecated.o 00:03:57.643 CC lib/ut_mock/mock.o 00:03:57.643 CC lib/ut/ut.o 00:03:57.643 LIB libspdk_ut.a 00:03:57.643 LIB libspdk_ut_mock.a 00:03:57.643 LIB libspdk_log.a 00:03:57.643 SO libspdk_ut.so.2.0 00:03:57.643 SO libspdk_ut_mock.so.6.0 00:03:57.643 SO libspdk_log.so.7.1 00:03:57.643 SYMLINK libspdk_ut_mock.so 00:03:57.643 SYMLINK libspdk_ut.so 00:03:57.643 SYMLINK libspdk_log.so 00:03:57.643 CC lib/dma/dma.o 00:03:57.643 CC lib/ioat/ioat.o 00:03:57.643 CXX lib/trace_parser/trace.o 00:03:57.643 CC lib/util/base64.o 00:03:57.643 CC lib/util/bit_array.o 00:03:57.643 CC lib/util/cpuset.o 00:03:57.643 CC lib/util/crc16.o 00:03:57.643 CC lib/util/crc32.o 00:03:57.643 CC lib/util/crc32c.o 00:03:57.643 CC lib/util/crc32_ieee.o 00:03:57.643 CC lib/util/crc64.o 00:03:57.643 CC lib/util/dif.o 00:03:57.643 CC lib/util/fd.o 00:03:57.643 CC lib/util/fd_group.o 00:03:57.643 CC lib/util/file.o 00:03:57.643 CC lib/util/hexlify.o 00:03:57.643 CC lib/util/iov.o 00:03:57.643 CC lib/util/math.o 00:03:57.643 CC lib/util/net.o 00:03:57.643 CC lib/util/pipe.o 00:03:57.643 CC lib/util/strerror_tls.o 00:03:57.643 CC lib/util/string.o 00:03:57.643 CC lib/util/uuid.o 00:03:57.643 CC lib/util/xor.o 00:03:57.643 CC lib/util/zipf.o 00:03:57.643 CC lib/util/md5.o 00:03:57.643 CC lib/vfio_user/host/vfio_user_pci.o 00:03:57.643 CC lib/vfio_user/host/vfio_user.o 00:03:57.643 LIB libspdk_dma.a 00:03:57.643 SO libspdk_dma.so.5.0 00:03:57.643 SYMLINK libspdk_dma.so 00:03:57.643 LIB libspdk_ioat.a 00:03:57.643 SO libspdk_ioat.so.7.0 00:03:57.643 SYMLINK libspdk_ioat.so 00:03:57.643 LIB libspdk_vfio_user.a 00:03:57.643 SO libspdk_vfio_user.so.5.0 00:03:57.643 SYMLINK libspdk_vfio_user.so 00:03:57.643 LIB libspdk_util.a 00:03:57.643 SO libspdk_util.so.10.1 00:03:57.643 SYMLINK libspdk_util.so 00:03:57.643 CC lib/conf/conf.o 00:03:57.643 CC lib/rdma_utils/rdma_utils.o 00:03:57.643 CC lib/idxd/idxd.o 
00:03:57.643 CC lib/json/json_parse.o 00:03:57.643 CC lib/idxd/idxd_user.o 00:03:57.643 CC lib/json/json_util.o 00:03:57.643 CC lib/vmd/vmd.o 00:03:57.643 CC lib/idxd/idxd_kernel.o 00:03:57.643 CC lib/json/json_write.o 00:03:57.643 CC lib/env_dpdk/env.o 00:03:57.643 CC lib/vmd/led.o 00:03:57.643 CC lib/env_dpdk/memory.o 00:03:57.643 CC lib/env_dpdk/pci.o 00:03:57.643 CC lib/env_dpdk/init.o 00:03:57.643 CC lib/env_dpdk/threads.o 00:03:57.643 CC lib/env_dpdk/pci_ioat.o 00:03:57.643 CC lib/env_dpdk/pci_virtio.o 00:03:57.643 CC lib/env_dpdk/pci_vmd.o 00:03:57.644 CC lib/env_dpdk/pci_idxd.o 00:03:57.644 CC lib/env_dpdk/pci_event.o 00:03:57.644 CC lib/env_dpdk/sigbus_handler.o 00:03:57.644 CC lib/env_dpdk/pci_dpdk.o 00:03:57.644 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:57.644 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:57.644 LIB libspdk_rdma_utils.a 00:03:57.644 LIB libspdk_json.a 00:03:57.644 SO libspdk_rdma_utils.so.1.0 00:03:57.644 LIB libspdk_conf.a 00:03:57.644 SO libspdk_json.so.6.0 00:03:57.644 SO libspdk_conf.so.6.0 00:03:57.644 SYMLINK libspdk_rdma_utils.so 00:03:57.644 SYMLINK libspdk_conf.so 00:03:57.644 SYMLINK libspdk_json.so 00:03:57.644 CC lib/rdma_provider/common.o 00:03:57.644 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:57.644 CC lib/jsonrpc/jsonrpc_server.o 00:03:57.644 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:57.644 CC lib/jsonrpc/jsonrpc_client.o 00:03:57.644 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:57.644 LIB libspdk_idxd.a 00:03:57.644 SO libspdk_idxd.so.12.1 00:03:57.644 LIB libspdk_vmd.a 00:03:57.644 SYMLINK libspdk_idxd.so 00:03:57.644 SO libspdk_vmd.so.6.0 00:03:57.644 SYMLINK libspdk_vmd.so 00:03:57.644 LIB libspdk_rdma_provider.a 00:03:57.644 SO libspdk_rdma_provider.so.7.0 00:03:57.644 LIB libspdk_jsonrpc.a 00:03:57.644 SYMLINK libspdk_rdma_provider.so 00:03:57.644 SO libspdk_jsonrpc.so.6.0 00:03:57.644 SYMLINK libspdk_jsonrpc.so 00:03:57.644 LIB libspdk_trace_parser.a 00:03:57.644 SO libspdk_trace_parser.so.6.0 00:03:57.930 SYMLINK 
libspdk_trace_parser.so 00:03:57.930 CC lib/rpc/rpc.o 00:03:57.930 LIB libspdk_rpc.a 00:03:58.189 SO libspdk_rpc.so.6.0 00:03:58.189 SYMLINK libspdk_rpc.so 00:03:58.189 CC lib/keyring/keyring.o 00:03:58.189 CC lib/notify/notify.o 00:03:58.189 CC lib/keyring/keyring_rpc.o 00:03:58.189 CC lib/notify/notify_rpc.o 00:03:58.189 CC lib/trace/trace.o 00:03:58.189 CC lib/trace/trace_flags.o 00:03:58.189 CC lib/trace/trace_rpc.o 00:03:58.452 LIB libspdk_notify.a 00:03:58.452 SO libspdk_notify.so.6.0 00:03:58.452 SYMLINK libspdk_notify.so 00:03:58.452 LIB libspdk_keyring.a 00:03:58.452 LIB libspdk_trace.a 00:03:58.452 SO libspdk_keyring.so.2.0 00:03:58.711 SO libspdk_trace.so.11.0 00:03:58.711 SYMLINK libspdk_keyring.so 00:03:58.711 SYMLINK libspdk_trace.so 00:03:58.711 CC lib/thread/thread.o 00:03:58.711 CC lib/thread/iobuf.o 00:03:58.711 CC lib/sock/sock.o 00:03:58.711 CC lib/sock/sock_rpc.o 00:03:58.711 LIB libspdk_env_dpdk.a 00:03:58.970 SO libspdk_env_dpdk.so.15.1 00:03:58.970 SYMLINK libspdk_env_dpdk.so 00:03:59.240 LIB libspdk_sock.a 00:03:59.240 SO libspdk_sock.so.10.0 00:03:59.240 SYMLINK libspdk_sock.so 00:03:59.503 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:59.503 CC lib/nvme/nvme_ctrlr.o 00:03:59.503 CC lib/nvme/nvme_fabric.o 00:03:59.503 CC lib/nvme/nvme_ns_cmd.o 00:03:59.503 CC lib/nvme/nvme_ns.o 00:03:59.503 CC lib/nvme/nvme_pcie_common.o 00:03:59.503 CC lib/nvme/nvme_pcie.o 00:03:59.503 CC lib/nvme/nvme_qpair.o 00:03:59.503 CC lib/nvme/nvme.o 00:03:59.503 CC lib/nvme/nvme_quirks.o 00:03:59.503 CC lib/nvme/nvme_transport.o 00:03:59.503 CC lib/nvme/nvme_discovery.o 00:03:59.503 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:59.503 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:59.503 CC lib/nvme/nvme_tcp.o 00:03:59.503 CC lib/nvme/nvme_opal.o 00:03:59.503 CC lib/nvme/nvme_io_msg.o 00:03:59.503 CC lib/nvme/nvme_poll_group.o 00:03:59.503 CC lib/nvme/nvme_zns.o 00:03:59.503 CC lib/nvme/nvme_stubs.o 00:03:59.503 CC lib/nvme/nvme_auth.o 00:03:59.503 CC lib/nvme/nvme_cuse.o 00:03:59.503 CC 
lib/nvme/nvme_vfio_user.o 00:03:59.503 CC lib/nvme/nvme_rdma.o 00:04:00.442 LIB libspdk_thread.a 00:04:00.442 SO libspdk_thread.so.11.0 00:04:00.442 SYMLINK libspdk_thread.so 00:04:00.701 CC lib/accel/accel.o 00:04:00.701 CC lib/accel/accel_rpc.o 00:04:00.701 CC lib/accel/accel_sw.o 00:04:00.701 CC lib/fsdev/fsdev.o 00:04:00.701 CC lib/fsdev/fsdev_io.o 00:04:00.701 CC lib/vfu_tgt/tgt_endpoint.o 00:04:00.701 CC lib/blob/blobstore.o 00:04:00.701 CC lib/vfu_tgt/tgt_rpc.o 00:04:00.701 CC lib/init/json_config.o 00:04:00.701 CC lib/virtio/virtio.o 00:04:00.701 CC lib/blob/request.o 00:04:00.701 CC lib/fsdev/fsdev_rpc.o 00:04:00.701 CC lib/init/subsystem.o 00:04:00.701 CC lib/blob/zeroes.o 00:04:00.701 CC lib/virtio/virtio_vhost_user.o 00:04:00.701 CC lib/blob/blob_bs_dev.o 00:04:00.701 CC lib/init/subsystem_rpc.o 00:04:00.701 CC lib/virtio/virtio_pci.o 00:04:00.701 CC lib/init/rpc.o 00:04:00.701 CC lib/virtio/virtio_vfio_user.o 00:04:00.960 LIB libspdk_init.a 00:04:00.960 SO libspdk_init.so.6.0 00:04:00.960 LIB libspdk_virtio.a 00:04:00.960 SYMLINK libspdk_init.so 00:04:00.960 LIB libspdk_vfu_tgt.a 00:04:00.960 SO libspdk_vfu_tgt.so.3.0 00:04:00.960 SO libspdk_virtio.so.7.0 00:04:01.219 SYMLINK libspdk_vfu_tgt.so 00:04:01.219 SYMLINK libspdk_virtio.so 00:04:01.219 CC lib/event/app.o 00:04:01.219 CC lib/event/reactor.o 00:04:01.219 CC lib/event/log_rpc.o 00:04:01.219 CC lib/event/app_rpc.o 00:04:01.219 CC lib/event/scheduler_static.o 00:04:01.219 LIB libspdk_fsdev.a 00:04:01.478 SO libspdk_fsdev.so.2.0 00:04:01.478 SYMLINK libspdk_fsdev.so 00:04:01.478 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:01.736 LIB libspdk_event.a 00:04:01.736 SO libspdk_event.so.14.0 00:04:01.736 SYMLINK libspdk_event.so 00:04:01.736 LIB libspdk_accel.a 00:04:01.736 SO libspdk_accel.so.16.0 00:04:01.995 SYMLINK libspdk_accel.so 00:04:01.995 LIB libspdk_nvme.a 00:04:01.995 CC lib/bdev/bdev.o 00:04:01.995 CC lib/bdev/bdev_rpc.o 00:04:01.995 CC lib/bdev/bdev_zone.o 00:04:01.995 CC lib/bdev/part.o 
00:04:01.995 CC lib/bdev/scsi_nvme.o 00:04:01.995 SO libspdk_nvme.so.15.0 00:04:02.253 LIB libspdk_fuse_dispatcher.a 00:04:02.253 SO libspdk_fuse_dispatcher.so.1.0 00:04:02.253 SYMLINK libspdk_fuse_dispatcher.so 00:04:02.253 SYMLINK libspdk_nvme.so 00:04:04.158 LIB libspdk_blob.a 00:04:04.158 SO libspdk_blob.so.11.0 00:04:04.158 SYMLINK libspdk_blob.so 00:04:04.158 CC lib/blobfs/blobfs.o 00:04:04.158 CC lib/blobfs/tree.o 00:04:04.158 CC lib/lvol/lvol.o 00:04:04.726 LIB libspdk_bdev.a 00:04:04.726 SO libspdk_bdev.so.17.0 00:04:04.989 LIB libspdk_blobfs.a 00:04:04.989 SYMLINK libspdk_bdev.so 00:04:04.989 SO libspdk_blobfs.so.10.0 00:04:04.989 SYMLINK libspdk_blobfs.so 00:04:04.989 LIB libspdk_lvol.a 00:04:04.989 SO libspdk_lvol.so.10.0 00:04:04.989 CC lib/nbd/nbd.o 00:04:04.989 CC lib/nbd/nbd_rpc.o 00:04:04.989 CC lib/ublk/ublk.o 00:04:04.989 CC lib/ublk/ublk_rpc.o 00:04:04.989 CC lib/ftl/ftl_core.o 00:04:04.989 CC lib/scsi/dev.o 00:04:04.989 CC lib/nvmf/ctrlr.o 00:04:04.989 CC lib/scsi/lun.o 00:04:04.989 CC lib/ftl/ftl_init.o 00:04:04.989 CC lib/nvmf/ctrlr_discovery.o 00:04:04.989 CC lib/scsi/port.o 00:04:04.989 CC lib/ftl/ftl_layout.o 00:04:04.989 CC lib/nvmf/ctrlr_bdev.o 00:04:04.989 CC lib/scsi/scsi.o 00:04:04.989 CC lib/ftl/ftl_debug.o 00:04:04.989 CC lib/nvmf/subsystem.o 00:04:04.989 CC lib/scsi/scsi_bdev.o 00:04:04.989 CC lib/nvmf/nvmf.o 00:04:04.989 CC lib/ftl/ftl_io.o 00:04:04.989 CC lib/nvmf/nvmf_rpc.o 00:04:04.989 CC lib/scsi/scsi_pr.o 00:04:04.989 CC lib/ftl/ftl_sb.o 00:04:04.989 CC lib/nvmf/transport.o 00:04:04.989 CC lib/scsi/scsi_rpc.o 00:04:04.989 CC lib/ftl/ftl_l2p.o 00:04:04.989 CC lib/scsi/task.o 00:04:04.989 CC lib/ftl/ftl_l2p_flat.o 00:04:04.989 CC lib/nvmf/stubs.o 00:04:04.989 CC lib/nvmf/tcp.o 00:04:04.989 CC lib/ftl/ftl_nv_cache.o 00:04:04.989 CC lib/ftl/ftl_band.o 00:04:04.989 CC lib/nvmf/mdns_server.o 00:04:04.989 CC lib/ftl/ftl_band_ops.o 00:04:04.989 CC lib/nvmf/vfio_user.o 00:04:04.989 CC lib/ftl/ftl_writer.o 00:04:04.989 CC 
lib/nvmf/rdma.o 00:04:04.989 CC lib/nvmf/auth.o 00:04:04.989 CC lib/ftl/ftl_rq.o 00:04:04.989 CC lib/ftl/ftl_reloc.o 00:04:04.989 CC lib/ftl/ftl_l2p_cache.o 00:04:04.989 CC lib/ftl/ftl_p2l.o 00:04:04.989 CC lib/ftl/ftl_p2l_log.o 00:04:04.989 CC lib/ftl/mngt/ftl_mngt.o 00:04:04.989 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:04.989 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:04.989 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:04.989 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:04.989 SYMLINK libspdk_lvol.so 00:04:04.989 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:05.560 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:05.560 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:05.560 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:05.560 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:05.560 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:05.560 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:05.560 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:05.560 CC lib/ftl/utils/ftl_conf.o 00:04:05.560 CC lib/ftl/utils/ftl_md.o 00:04:05.560 CC lib/ftl/utils/ftl_mempool.o 00:04:05.560 CC lib/ftl/utils/ftl_bitmap.o 00:04:05.560 CC lib/ftl/utils/ftl_property.o 00:04:05.560 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:05.560 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:05.560 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:05.560 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:05.560 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:05.560 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:05.560 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:05.820 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:05.820 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:05.820 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:05.820 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:05.820 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:05.820 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:05.820 CC lib/ftl/base/ftl_base_dev.o 00:04:05.820 CC lib/ftl/base/ftl_base_bdev.o 00:04:05.820 CC lib/ftl/ftl_trace.o 00:04:05.820 LIB libspdk_nbd.a 00:04:05.820 SO libspdk_nbd.so.7.0 00:04:06.079 LIB libspdk_scsi.a 00:04:06.079 SYMLINK libspdk_nbd.so 00:04:06.079 SO 
libspdk_scsi.so.9.0 00:04:06.079 SYMLINK libspdk_scsi.so 00:04:06.079 LIB libspdk_ublk.a 00:04:06.079 SO libspdk_ublk.so.3.0 00:04:06.339 SYMLINK libspdk_ublk.so 00:04:06.339 CC lib/iscsi/conn.o 00:04:06.339 CC lib/vhost/vhost.o 00:04:06.339 CC lib/iscsi/init_grp.o 00:04:06.339 CC lib/vhost/vhost_rpc.o 00:04:06.339 CC lib/iscsi/iscsi.o 00:04:06.339 CC lib/vhost/vhost_scsi.o 00:04:06.339 CC lib/iscsi/param.o 00:04:06.339 CC lib/vhost/vhost_blk.o 00:04:06.339 CC lib/iscsi/portal_grp.o 00:04:06.339 CC lib/vhost/rte_vhost_user.o 00:04:06.339 CC lib/iscsi/tgt_node.o 00:04:06.339 CC lib/iscsi/iscsi_subsystem.o 00:04:06.339 CC lib/iscsi/iscsi_rpc.o 00:04:06.339 CC lib/iscsi/task.o 00:04:06.598 LIB libspdk_ftl.a 00:04:06.855 SO libspdk_ftl.so.9.0 00:04:07.113 SYMLINK libspdk_ftl.so 00:04:07.682 LIB libspdk_vhost.a 00:04:07.682 SO libspdk_vhost.so.8.0 00:04:07.682 SYMLINK libspdk_vhost.so 00:04:07.682 LIB libspdk_nvmf.a 00:04:07.682 SO libspdk_nvmf.so.20.0 00:04:07.682 LIB libspdk_iscsi.a 00:04:07.944 SO libspdk_iscsi.so.8.0 00:04:07.944 SYMLINK libspdk_nvmf.so 00:04:07.944 SYMLINK libspdk_iscsi.so 00:04:08.204 CC module/vfu_device/vfu_virtio.o 00:04:08.204 CC module/vfu_device/vfu_virtio_blk.o 00:04:08.204 CC module/vfu_device/vfu_virtio_scsi.o 00:04:08.204 CC module/env_dpdk/env_dpdk_rpc.o 00:04:08.204 CC module/vfu_device/vfu_virtio_rpc.o 00:04:08.204 CC module/vfu_device/vfu_virtio_fs.o 00:04:08.463 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:08.463 CC module/accel/error/accel_error.o 00:04:08.463 CC module/accel/dsa/accel_dsa.o 00:04:08.463 CC module/scheduler/gscheduler/gscheduler.o 00:04:08.463 CC module/accel/error/accel_error_rpc.o 00:04:08.463 CC module/accel/dsa/accel_dsa_rpc.o 00:04:08.463 CC module/accel/ioat/accel_ioat.o 00:04:08.463 CC module/accel/ioat/accel_ioat_rpc.o 00:04:08.463 CC module/fsdev/aio/fsdev_aio.o 00:04:08.463 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:08.463 CC module/sock/posix/posix.o 00:04:08.463 CC 
module/keyring/linux/keyring.o 00:04:08.463 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:08.463 CC module/keyring/linux/keyring_rpc.o 00:04:08.463 CC module/blob/bdev/blob_bdev.o 00:04:08.463 CC module/keyring/file/keyring_rpc.o 00:04:08.463 CC module/keyring/file/keyring.o 00:04:08.463 CC module/accel/iaa/accel_iaa.o 00:04:08.463 CC module/fsdev/aio/linux_aio_mgr.o 00:04:08.463 CC module/accel/iaa/accel_iaa_rpc.o 00:04:08.463 LIB libspdk_env_dpdk_rpc.a 00:04:08.463 SO libspdk_env_dpdk_rpc.so.6.0 00:04:08.463 LIB libspdk_keyring_file.a 00:04:08.463 LIB libspdk_scheduler_gscheduler.a 00:04:08.463 SYMLINK libspdk_env_dpdk_rpc.so 00:04:08.463 SO libspdk_scheduler_gscheduler.so.4.0 00:04:08.463 SO libspdk_keyring_file.so.2.0 00:04:08.463 LIB libspdk_scheduler_dpdk_governor.a 00:04:08.463 LIB libspdk_accel_ioat.a 00:04:08.463 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:08.722 LIB libspdk_accel_iaa.a 00:04:08.722 LIB libspdk_keyring_linux.a 00:04:08.722 SO libspdk_accel_ioat.so.6.0 00:04:08.722 SYMLINK libspdk_scheduler_gscheduler.so 00:04:08.722 SYMLINK libspdk_keyring_file.so 00:04:08.722 SO libspdk_accel_iaa.so.3.0 00:04:08.722 SO libspdk_keyring_linux.so.1.0 00:04:08.722 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:08.722 SYMLINK libspdk_accel_ioat.so 00:04:08.722 LIB libspdk_scheduler_dynamic.a 00:04:08.722 LIB libspdk_accel_error.a 00:04:08.722 LIB libspdk_blob_bdev.a 00:04:08.722 LIB libspdk_accel_dsa.a 00:04:08.722 SYMLINK libspdk_accel_iaa.so 00:04:08.722 SYMLINK libspdk_keyring_linux.so 00:04:08.722 SO libspdk_scheduler_dynamic.so.4.0 00:04:08.722 SO libspdk_accel_error.so.2.0 00:04:08.722 SO libspdk_blob_bdev.so.11.0 00:04:08.722 SO libspdk_accel_dsa.so.5.0 00:04:08.722 SYMLINK libspdk_scheduler_dynamic.so 00:04:08.722 SYMLINK libspdk_blob_bdev.so 00:04:08.722 SYMLINK libspdk_accel_error.so 00:04:08.722 SYMLINK libspdk_accel_dsa.so 00:04:08.984 LIB libspdk_vfu_device.a 00:04:08.984 SO libspdk_vfu_device.so.3.0 00:04:08.984 CC 
module/blobfs/bdev/blobfs_bdev.o 00:04:08.984 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:08.984 CC module/bdev/malloc/bdev_malloc.o 00:04:08.984 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:08.984 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:08.984 CC module/bdev/lvol/vbdev_lvol.o 00:04:08.984 CC module/bdev/gpt/gpt.o 00:04:08.984 CC module/bdev/delay/vbdev_delay.o 00:04:08.984 CC module/bdev/gpt/vbdev_gpt.o 00:04:08.984 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:08.984 CC module/bdev/error/vbdev_error.o 00:04:08.984 CC module/bdev/aio/bdev_aio.o 00:04:08.984 CC module/bdev/error/vbdev_error_rpc.o 00:04:08.984 CC module/bdev/aio/bdev_aio_rpc.o 00:04:08.984 CC module/bdev/raid/bdev_raid.o 00:04:08.984 CC module/bdev/raid/bdev_raid_rpc.o 00:04:08.984 CC module/bdev/raid/bdev_raid_sb.o 00:04:08.984 CC module/bdev/raid/raid0.o 00:04:08.984 CC module/bdev/null/bdev_null.o 00:04:08.984 CC module/bdev/raid/raid1.o 00:04:08.985 CC module/bdev/nvme/bdev_nvme.o 00:04:08.985 CC module/bdev/split/vbdev_split.o 00:04:08.985 CC module/bdev/null/bdev_null_rpc.o 00:04:08.985 CC module/bdev/passthru/vbdev_passthru.o 00:04:08.985 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:08.985 CC module/bdev/raid/concat.o 00:04:08.985 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:08.985 CC module/bdev/ftl/bdev_ftl.o 00:04:08.985 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:08.985 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:08.985 CC module/bdev/split/vbdev_split_rpc.o 00:04:08.985 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:08.985 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:08.985 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:08.985 CC module/bdev/nvme/nvme_rpc.o 00:04:08.985 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:08.985 CC module/bdev/nvme/bdev_mdns_client.o 00:04:08.985 CC module/bdev/nvme/vbdev_opal.o 00:04:08.985 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:08.985 CC module/bdev/iscsi/bdev_iscsi.o 00:04:08.985 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:08.985 
CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:09.244 SYMLINK libspdk_vfu_device.so 00:04:09.244 LIB libspdk_fsdev_aio.a 00:04:09.244 SO libspdk_fsdev_aio.so.1.0 00:04:09.244 SYMLINK libspdk_fsdev_aio.so 00:04:09.244 LIB libspdk_sock_posix.a 00:04:09.244 SO libspdk_sock_posix.so.6.0 00:04:09.502 LIB libspdk_blobfs_bdev.a 00:04:09.502 SO libspdk_blobfs_bdev.so.6.0 00:04:09.502 SYMLINK libspdk_sock_posix.so 00:04:09.502 SYMLINK libspdk_blobfs_bdev.so 00:04:09.502 LIB libspdk_bdev_split.a 00:04:09.502 LIB libspdk_bdev_passthru.a 00:04:09.502 LIB libspdk_bdev_error.a 00:04:09.502 SO libspdk_bdev_split.so.6.0 00:04:09.502 SO libspdk_bdev_passthru.so.6.0 00:04:09.502 LIB libspdk_bdev_null.a 00:04:09.502 LIB libspdk_bdev_gpt.a 00:04:09.502 SO libspdk_bdev_error.so.6.0 00:04:09.502 LIB libspdk_bdev_delay.a 00:04:09.502 SO libspdk_bdev_null.so.6.0 00:04:09.502 SO libspdk_bdev_gpt.so.6.0 00:04:09.502 LIB libspdk_bdev_zone_block.a 00:04:09.502 SYMLINK libspdk_bdev_split.so 00:04:09.502 SO libspdk_bdev_delay.so.6.0 00:04:09.502 LIB libspdk_bdev_ftl.a 00:04:09.502 SYMLINK libspdk_bdev_passthru.so 00:04:09.502 LIB libspdk_bdev_iscsi.a 00:04:09.502 SYMLINK libspdk_bdev_error.so 00:04:09.502 SO libspdk_bdev_zone_block.so.6.0 00:04:09.502 SO libspdk_bdev_ftl.so.6.0 00:04:09.502 SYMLINK libspdk_bdev_null.so 00:04:09.502 SYMLINK libspdk_bdev_gpt.so 00:04:09.502 LIB libspdk_bdev_aio.a 00:04:09.502 SO libspdk_bdev_iscsi.so.6.0 00:04:09.502 LIB libspdk_bdev_malloc.a 00:04:09.502 SYMLINK libspdk_bdev_delay.so 00:04:09.761 SO libspdk_bdev_aio.so.6.0 00:04:09.761 SO libspdk_bdev_malloc.so.6.0 00:04:09.761 SYMLINK libspdk_bdev_zone_block.so 00:04:09.761 SYMLINK libspdk_bdev_ftl.so 00:04:09.761 SYMLINK libspdk_bdev_iscsi.so 00:04:09.761 SYMLINK libspdk_bdev_aio.so 00:04:09.761 SYMLINK libspdk_bdev_malloc.so 00:04:09.761 LIB libspdk_bdev_virtio.a 00:04:09.761 LIB libspdk_bdev_lvol.a 00:04:09.761 SO libspdk_bdev_virtio.so.6.0 00:04:09.761 SO libspdk_bdev_lvol.so.6.0 00:04:09.761 SYMLINK 
libspdk_bdev_virtio.so 00:04:09.761 SYMLINK libspdk_bdev_lvol.so 00:04:10.328 LIB libspdk_bdev_raid.a 00:04:10.328 SO libspdk_bdev_raid.so.6.0 00:04:10.328 SYMLINK libspdk_bdev_raid.so 00:04:11.706 LIB libspdk_bdev_nvme.a 00:04:11.706 SO libspdk_bdev_nvme.so.7.1 00:04:11.965 SYMLINK libspdk_bdev_nvme.so 00:04:12.224 CC module/event/subsystems/sock/sock.o 00:04:12.224 CC module/event/subsystems/scheduler/scheduler.o 00:04:12.224 CC module/event/subsystems/iobuf/iobuf.o 00:04:12.224 CC module/event/subsystems/fsdev/fsdev.o 00:04:12.224 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:12.224 CC module/event/subsystems/keyring/keyring.o 00:04:12.224 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:12.224 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:12.224 CC module/event/subsystems/vmd/vmd.o 00:04:12.224 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:12.482 LIB libspdk_event_keyring.a 00:04:12.482 LIB libspdk_event_vhost_blk.a 00:04:12.482 LIB libspdk_event_fsdev.a 00:04:12.482 LIB libspdk_event_vfu_tgt.a 00:04:12.482 LIB libspdk_event_scheduler.a 00:04:12.482 LIB libspdk_event_vmd.a 00:04:12.482 LIB libspdk_event_sock.a 00:04:12.482 SO libspdk_event_fsdev.so.1.0 00:04:12.482 SO libspdk_event_keyring.so.1.0 00:04:12.482 LIB libspdk_event_iobuf.a 00:04:12.482 SO libspdk_event_vhost_blk.so.3.0 00:04:12.482 SO libspdk_event_scheduler.so.4.0 00:04:12.482 SO libspdk_event_vfu_tgt.so.3.0 00:04:12.482 SO libspdk_event_sock.so.5.0 00:04:12.482 SO libspdk_event_vmd.so.6.0 00:04:12.482 SO libspdk_event_iobuf.so.3.0 00:04:12.482 SYMLINK libspdk_event_fsdev.so 00:04:12.482 SYMLINK libspdk_event_keyring.so 00:04:12.482 SYMLINK libspdk_event_vhost_blk.so 00:04:12.482 SYMLINK libspdk_event_vfu_tgt.so 00:04:12.482 SYMLINK libspdk_event_scheduler.so 00:04:12.482 SYMLINK libspdk_event_sock.so 00:04:12.482 SYMLINK libspdk_event_vmd.so 00:04:12.482 SYMLINK libspdk_event_iobuf.so 00:04:12.741 CC module/event/subsystems/accel/accel.o 00:04:12.741 LIB libspdk_event_accel.a 
00:04:12.741 SO libspdk_event_accel.so.6.0 00:04:13.003 SYMLINK libspdk_event_accel.so 00:04:13.003 CC module/event/subsystems/bdev/bdev.o 00:04:13.262 LIB libspdk_event_bdev.a 00:04:13.262 SO libspdk_event_bdev.so.6.0 00:04:13.262 SYMLINK libspdk_event_bdev.so 00:04:13.521 CC module/event/subsystems/ublk/ublk.o 00:04:13.521 CC module/event/subsystems/scsi/scsi.o 00:04:13.521 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:13.521 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:13.521 CC module/event/subsystems/nbd/nbd.o 00:04:13.521 LIB libspdk_event_ublk.a 00:04:13.521 LIB libspdk_event_nbd.a 00:04:13.521 LIB libspdk_event_scsi.a 00:04:13.521 SO libspdk_event_nbd.so.6.0 00:04:13.521 SO libspdk_event_ublk.so.3.0 00:04:13.779 SO libspdk_event_scsi.so.6.0 00:04:13.779 SYMLINK libspdk_event_nbd.so 00:04:13.779 SYMLINK libspdk_event_ublk.so 00:04:13.779 SYMLINK libspdk_event_scsi.so 00:04:13.779 LIB libspdk_event_nvmf.a 00:04:13.779 SO libspdk_event_nvmf.so.6.0 00:04:13.779 SYMLINK libspdk_event_nvmf.so 00:04:13.779 CC module/event/subsystems/iscsi/iscsi.o 00:04:13.779 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:14.040 LIB libspdk_event_vhost_scsi.a 00:04:14.040 LIB libspdk_event_iscsi.a 00:04:14.040 SO libspdk_event_vhost_scsi.so.3.0 00:04:14.040 SO libspdk_event_iscsi.so.6.0 00:04:14.040 SYMLINK libspdk_event_vhost_scsi.so 00:04:14.040 SYMLINK libspdk_event_iscsi.so 00:04:14.300 SO libspdk.so.6.0 00:04:14.300 SYMLINK libspdk.so 00:04:14.300 CXX app/trace/trace.o 00:04:14.300 CC app/trace_record/trace_record.o 00:04:14.300 CC app/spdk_top/spdk_top.o 00:04:14.300 CC app/spdk_nvme_identify/identify.o 00:04:14.300 CC app/spdk_nvme_perf/perf.o 00:04:14.300 CC app/spdk_nvme_discover/discovery_aer.o 00:04:14.300 CC test/rpc_client/rpc_client_test.o 00:04:14.300 TEST_HEADER include/spdk/accel.h 00:04:14.300 TEST_HEADER include/spdk/accel_module.h 00:04:14.300 TEST_HEADER include/spdk/assert.h 00:04:14.300 CC app/spdk_lspci/spdk_lspci.o 00:04:14.300 TEST_HEADER 
include/spdk/barrier.h 00:04:14.300 TEST_HEADER include/spdk/base64.h 00:04:14.300 TEST_HEADER include/spdk/bdev.h 00:04:14.300 TEST_HEADER include/spdk/bdev_module.h 00:04:14.300 TEST_HEADER include/spdk/bdev_zone.h 00:04:14.300 TEST_HEADER include/spdk/bit_array.h 00:04:14.300 TEST_HEADER include/spdk/bit_pool.h 00:04:14.300 TEST_HEADER include/spdk/blob_bdev.h 00:04:14.300 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:14.300 TEST_HEADER include/spdk/blobfs.h 00:04:14.300 TEST_HEADER include/spdk/blob.h 00:04:14.300 TEST_HEADER include/spdk/conf.h 00:04:14.300 TEST_HEADER include/spdk/config.h 00:04:14.300 TEST_HEADER include/spdk/cpuset.h 00:04:14.300 TEST_HEADER include/spdk/crc16.h 00:04:14.300 TEST_HEADER include/spdk/crc64.h 00:04:14.300 TEST_HEADER include/spdk/crc32.h 00:04:14.300 TEST_HEADER include/spdk/dif.h 00:04:14.300 TEST_HEADER include/spdk/dma.h 00:04:14.300 TEST_HEADER include/spdk/endian.h 00:04:14.566 TEST_HEADER include/spdk/env_dpdk.h 00:04:14.566 TEST_HEADER include/spdk/event.h 00:04:14.566 TEST_HEADER include/spdk/env.h 00:04:14.566 TEST_HEADER include/spdk/fd_group.h 00:04:14.566 TEST_HEADER include/spdk/fd.h 00:04:14.566 TEST_HEADER include/spdk/file.h 00:04:14.566 TEST_HEADER include/spdk/fsdev.h 00:04:14.566 TEST_HEADER include/spdk/fsdev_module.h 00:04:14.566 TEST_HEADER include/spdk/ftl.h 00:04:14.566 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:14.566 TEST_HEADER include/spdk/gpt_spec.h 00:04:14.566 TEST_HEADER include/spdk/hexlify.h 00:04:14.566 TEST_HEADER include/spdk/histogram_data.h 00:04:14.566 TEST_HEADER include/spdk/idxd_spec.h 00:04:14.566 TEST_HEADER include/spdk/idxd.h 00:04:14.566 TEST_HEADER include/spdk/init.h 00:04:14.566 TEST_HEADER include/spdk/ioat.h 00:04:14.566 TEST_HEADER include/spdk/ioat_spec.h 00:04:14.566 TEST_HEADER include/spdk/iscsi_spec.h 00:04:14.566 TEST_HEADER include/spdk/json.h 00:04:14.566 TEST_HEADER include/spdk/jsonrpc.h 00:04:14.566 TEST_HEADER include/spdk/keyring.h 00:04:14.566 
TEST_HEADER include/spdk/keyring_module.h 00:04:14.566 TEST_HEADER include/spdk/likely.h 00:04:14.566 TEST_HEADER include/spdk/log.h 00:04:14.566 TEST_HEADER include/spdk/lvol.h 00:04:14.566 TEST_HEADER include/spdk/memory.h 00:04:14.566 TEST_HEADER include/spdk/md5.h 00:04:14.566 TEST_HEADER include/spdk/mmio.h 00:04:14.566 TEST_HEADER include/spdk/nbd.h 00:04:14.566 TEST_HEADER include/spdk/net.h 00:04:14.567 TEST_HEADER include/spdk/notify.h 00:04:14.567 TEST_HEADER include/spdk/nvme.h 00:04:14.567 TEST_HEADER include/spdk/nvme_intel.h 00:04:14.567 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:14.567 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:14.567 TEST_HEADER include/spdk/nvme_spec.h 00:04:14.567 TEST_HEADER include/spdk/nvme_zns.h 00:04:14.567 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:14.567 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:14.567 TEST_HEADER include/spdk/nvmf.h 00:04:14.567 TEST_HEADER include/spdk/nvmf_spec.h 00:04:14.567 TEST_HEADER include/spdk/nvmf_transport.h 00:04:14.567 TEST_HEADER include/spdk/opal.h 00:04:14.567 TEST_HEADER include/spdk/opal_spec.h 00:04:14.567 TEST_HEADER include/spdk/pci_ids.h 00:04:14.567 TEST_HEADER include/spdk/pipe.h 00:04:14.567 TEST_HEADER include/spdk/queue.h 00:04:14.567 TEST_HEADER include/spdk/reduce.h 00:04:14.567 TEST_HEADER include/spdk/scheduler.h 00:04:14.567 TEST_HEADER include/spdk/rpc.h 00:04:14.567 TEST_HEADER include/spdk/scsi.h 00:04:14.567 TEST_HEADER include/spdk/scsi_spec.h 00:04:14.567 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:14.567 TEST_HEADER include/spdk/stdinc.h 00:04:14.567 TEST_HEADER include/spdk/sock.h 00:04:14.567 TEST_HEADER include/spdk/string.h 00:04:14.567 TEST_HEADER include/spdk/thread.h 00:04:14.567 TEST_HEADER include/spdk/trace.h 00:04:14.567 TEST_HEADER include/spdk/trace_parser.h 00:04:14.567 TEST_HEADER include/spdk/tree.h 00:04:14.567 TEST_HEADER include/spdk/ublk.h 00:04:14.567 TEST_HEADER include/spdk/uuid.h 00:04:14.567 TEST_HEADER include/spdk/util.h 
00:04:14.567 TEST_HEADER include/spdk/version.h 00:04:14.567 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:14.567 TEST_HEADER include/spdk/vhost.h 00:04:14.567 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:14.567 TEST_HEADER include/spdk/vmd.h 00:04:14.567 TEST_HEADER include/spdk/xor.h 00:04:14.567 TEST_HEADER include/spdk/zipf.h 00:04:14.567 CXX test/cpp_headers/accel.o 00:04:14.567 CXX test/cpp_headers/accel_module.o 00:04:14.567 CXX test/cpp_headers/assert.o 00:04:14.567 CXX test/cpp_headers/barrier.o 00:04:14.567 CXX test/cpp_headers/base64.o 00:04:14.567 CXX test/cpp_headers/bdev.o 00:04:14.567 CXX test/cpp_headers/bdev_module.o 00:04:14.567 CXX test/cpp_headers/bdev_zone.o 00:04:14.567 CXX test/cpp_headers/bit_array.o 00:04:14.567 CXX test/cpp_headers/bit_pool.o 00:04:14.567 CXX test/cpp_headers/blob_bdev.o 00:04:14.567 CXX test/cpp_headers/blobfs_bdev.o 00:04:14.567 CXX test/cpp_headers/blobfs.o 00:04:14.567 CXX test/cpp_headers/blob.o 00:04:14.567 CXX test/cpp_headers/conf.o 00:04:14.567 CXX test/cpp_headers/config.o 00:04:14.567 CC app/iscsi_tgt/iscsi_tgt.o 00:04:14.567 CXX test/cpp_headers/cpuset.o 00:04:14.567 CXX test/cpp_headers/crc16.o 00:04:14.567 CC app/spdk_dd/spdk_dd.o 00:04:14.567 CC app/nvmf_tgt/nvmf_main.o 00:04:14.567 CXX test/cpp_headers/crc32.o 00:04:14.567 CC examples/ioat/perf/perf.o 00:04:14.567 CC examples/util/zipf/zipf.o 00:04:14.567 CC app/spdk_tgt/spdk_tgt.o 00:04:14.567 CC examples/ioat/verify/verify.o 00:04:14.567 CC test/env/vtophys/vtophys.o 00:04:14.567 CC test/app/histogram_perf/histogram_perf.o 00:04:14.567 CC test/env/pci/pci_ut.o 00:04:14.567 CC test/env/memory/memory_ut.o 00:04:14.567 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:14.567 CC app/fio/nvme/fio_plugin.o 00:04:14.567 CC test/app/jsoncat/jsoncat.o 00:04:14.567 CC test/app/stub/stub.o 00:04:14.567 CC test/thread/poller_perf/poller_perf.o 00:04:14.567 CC test/dma/test_dma/test_dma.o 00:04:14.567 CC app/fio/bdev/fio_plugin.o 00:04:14.567 CC 
test/app/bdev_svc/bdev_svc.o 00:04:14.831 LINK spdk_lspci 00:04:14.831 CC test/env/mem_callbacks/mem_callbacks.o 00:04:14.831 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:14.831 LINK spdk_nvme_discover 00:04:14.831 LINK rpc_client_test 00:04:14.831 LINK jsoncat 00:04:14.831 LINK histogram_perf 00:04:14.831 LINK interrupt_tgt 00:04:14.831 LINK zipf 00:04:14.831 LINK vtophys 00:04:14.831 CXX test/cpp_headers/crc64.o 00:04:14.831 CXX test/cpp_headers/dif.o 00:04:14.831 LINK spdk_trace_record 00:04:14.831 LINK nvmf_tgt 00:04:15.098 LINK poller_perf 00:04:15.098 LINK env_dpdk_post_init 00:04:15.098 CXX test/cpp_headers/dma.o 00:04:15.098 CXX test/cpp_headers/endian.o 00:04:15.098 CXX test/cpp_headers/env_dpdk.o 00:04:15.098 CXX test/cpp_headers/env.o 00:04:15.098 CXX test/cpp_headers/event.o 00:04:15.098 LINK stub 00:04:15.098 CXX test/cpp_headers/fd_group.o 00:04:15.098 CXX test/cpp_headers/fd.o 00:04:15.098 CXX test/cpp_headers/file.o 00:04:15.098 CXX test/cpp_headers/fsdev.o 00:04:15.098 CXX test/cpp_headers/fsdev_module.o 00:04:15.098 CXX test/cpp_headers/ftl.o 00:04:15.098 LINK ioat_perf 00:04:15.098 CXX test/cpp_headers/fuse_dispatcher.o 00:04:15.098 CXX test/cpp_headers/gpt_spec.o 00:04:15.098 LINK iscsi_tgt 00:04:15.098 CXX test/cpp_headers/hexlify.o 00:04:15.098 LINK verify 00:04:15.098 LINK bdev_svc 00:04:15.098 CXX test/cpp_headers/histogram_data.o 00:04:15.098 LINK spdk_tgt 00:04:15.098 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:15.098 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:15.098 LINK mem_callbacks 00:04:15.098 CXX test/cpp_headers/idxd.o 00:04:15.098 CXX test/cpp_headers/idxd_spec.o 00:04:15.098 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:15.098 CXX test/cpp_headers/init.o 00:04:15.098 CXX test/cpp_headers/ioat.o 00:04:15.364 CXX test/cpp_headers/ioat_spec.o 00:04:15.364 LINK spdk_dd 00:04:15.364 CXX test/cpp_headers/iscsi_spec.o 00:04:15.364 CXX test/cpp_headers/json.o 00:04:15.364 CXX test/cpp_headers/jsonrpc.o 00:04:15.364 CXX 
test/cpp_headers/keyring.o 00:04:15.364 CXX test/cpp_headers/keyring_module.o 00:04:15.364 LINK spdk_trace 00:04:15.364 CXX test/cpp_headers/likely.o 00:04:15.364 CXX test/cpp_headers/log.o 00:04:15.364 LINK pci_ut 00:04:15.364 CXX test/cpp_headers/lvol.o 00:04:15.364 CXX test/cpp_headers/md5.o 00:04:15.364 CXX test/cpp_headers/memory.o 00:04:15.364 CXX test/cpp_headers/mmio.o 00:04:15.364 CXX test/cpp_headers/nbd.o 00:04:15.364 CXX test/cpp_headers/net.o 00:04:15.364 CXX test/cpp_headers/notify.o 00:04:15.364 CXX test/cpp_headers/nvme.o 00:04:15.364 CXX test/cpp_headers/nvme_intel.o 00:04:15.364 CXX test/cpp_headers/nvme_ocssd.o 00:04:15.364 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:15.364 CXX test/cpp_headers/nvme_spec.o 00:04:15.364 CXX test/cpp_headers/nvme_zns.o 00:04:15.364 CXX test/cpp_headers/nvmf_cmd.o 00:04:15.364 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:15.364 CXX test/cpp_headers/nvmf.o 00:04:15.364 CXX test/cpp_headers/nvmf_spec.o 00:04:15.364 CXX test/cpp_headers/nvmf_transport.o 00:04:15.364 CXX test/cpp_headers/opal.o 00:04:15.625 LINK nvme_fuzz 00:04:15.625 CXX test/cpp_headers/opal_spec.o 00:04:15.625 CC examples/sock/hello_world/hello_sock.o 00:04:15.625 CC examples/thread/thread/thread_ex.o 00:04:15.625 CXX test/cpp_headers/pci_ids.o 00:04:15.625 CXX test/cpp_headers/pipe.o 00:04:15.625 CC test/event/event_perf/event_perf.o 00:04:15.625 LINK test_dma 00:04:15.625 CXX test/cpp_headers/queue.o 00:04:15.625 CXX test/cpp_headers/reduce.o 00:04:15.625 CC examples/vmd/lsvmd/lsvmd.o 00:04:15.625 CC examples/idxd/perf/perf.o 00:04:15.625 CXX test/cpp_headers/rpc.o 00:04:15.626 CC test/event/reactor/reactor.o 00:04:15.626 CXX test/cpp_headers/scheduler.o 00:04:15.887 CXX test/cpp_headers/scsi.o 00:04:15.887 CC test/event/reactor_perf/reactor_perf.o 00:04:15.887 CC examples/vmd/led/led.o 00:04:15.887 CXX test/cpp_headers/scsi_spec.o 00:04:15.887 CXX test/cpp_headers/sock.o 00:04:15.887 CXX test/cpp_headers/stdinc.o 00:04:15.887 CC 
test/event/app_repeat/app_repeat.o 00:04:15.887 CXX test/cpp_headers/string.o 00:04:15.887 CXX test/cpp_headers/trace.o 00:04:15.887 CXX test/cpp_headers/thread.o 00:04:15.887 CXX test/cpp_headers/trace_parser.o 00:04:15.887 CXX test/cpp_headers/tree.o 00:04:15.887 CC test/event/scheduler/scheduler.o 00:04:15.887 CXX test/cpp_headers/ublk.o 00:04:15.887 LINK spdk_bdev 00:04:15.887 CXX test/cpp_headers/util.o 00:04:15.887 CXX test/cpp_headers/uuid.o 00:04:15.887 CXX test/cpp_headers/version.o 00:04:15.887 CC app/vhost/vhost.o 00:04:15.887 CXX test/cpp_headers/vfio_user_pci.o 00:04:15.887 CXX test/cpp_headers/vfio_user_spec.o 00:04:15.887 CXX test/cpp_headers/vhost.o 00:04:15.887 CXX test/cpp_headers/vmd.o 00:04:15.887 LINK spdk_nvme_perf 00:04:15.887 CXX test/cpp_headers/xor.o 00:04:15.887 LINK spdk_nvme 00:04:15.887 CXX test/cpp_headers/zipf.o 00:04:16.149 LINK event_perf 00:04:16.149 LINK lsvmd 00:04:16.149 LINK reactor 00:04:16.149 LINK vhost_fuzz 00:04:16.149 LINK reactor_perf 00:04:16.149 LINK spdk_nvme_identify 00:04:16.149 LINK thread 00:04:16.149 LINK memory_ut 00:04:16.149 LINK led 00:04:16.149 LINK hello_sock 00:04:16.149 LINK spdk_top 00:04:16.149 LINK app_repeat 00:04:16.410 CC test/nvme/e2edp/nvme_dp.o 00:04:16.410 LINK vhost 00:04:16.410 CC test/nvme/aer/aer.o 00:04:16.410 CC test/nvme/reset/reset.o 00:04:16.410 CC test/nvme/simple_copy/simple_copy.o 00:04:16.410 CC test/nvme/overhead/overhead.o 00:04:16.410 CC test/nvme/boot_partition/boot_partition.o 00:04:16.410 CC test/nvme/err_injection/err_injection.o 00:04:16.410 CC test/nvme/compliance/nvme_compliance.o 00:04:16.410 CC test/nvme/reserve/reserve.o 00:04:16.410 CC test/nvme/connect_stress/connect_stress.o 00:04:16.410 CC test/nvme/sgl/sgl.o 00:04:16.410 CC test/nvme/fused_ordering/fused_ordering.o 00:04:16.410 CC test/nvme/startup/startup.o 00:04:16.410 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:16.410 LINK scheduler 00:04:16.411 CC test/nvme/fdp/fdp.o 00:04:16.411 CC test/nvme/cuse/cuse.o 
00:04:16.411 CC test/blobfs/mkfs/mkfs.o 00:04:16.411 LINK idxd_perf 00:04:16.411 CC test/accel/dif/dif.o 00:04:16.411 CC test/lvol/esnap/esnap.o 00:04:16.672 LINK startup 00:04:16.672 LINK reserve 00:04:16.672 LINK fused_ordering 00:04:16.672 CC examples/nvme/hotplug/hotplug.o 00:04:16.672 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:16.672 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:16.672 CC examples/nvme/hello_world/hello_world.o 00:04:16.672 CC examples/nvme/reconnect/reconnect.o 00:04:16.672 CC examples/nvme/abort/abort.o 00:04:16.672 CC examples/nvme/arbitration/arbitration.o 00:04:16.672 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:16.672 LINK doorbell_aers 00:04:16.672 CC examples/accel/perf/accel_perf.o 00:04:16.672 LINK boot_partition 00:04:16.672 LINK simple_copy 00:04:16.672 LINK err_injection 00:04:16.672 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:16.672 LINK connect_stress 00:04:16.672 CC examples/blob/cli/blobcli.o 00:04:16.672 CC examples/blob/hello_world/hello_blob.o 00:04:16.672 LINK sgl 00:04:16.672 LINK reset 00:04:16.672 LINK mkfs 00:04:16.672 LINK aer 00:04:16.931 LINK overhead 00:04:16.931 LINK fdp 00:04:16.931 LINK nvme_dp 00:04:16.931 LINK pmr_persistence 00:04:16.931 LINK nvme_compliance 00:04:16.931 LINK cmb_copy 00:04:16.931 LINK hello_world 00:04:16.931 LINK hotplug 00:04:16.931 LINK hello_fsdev 00:04:17.192 LINK hello_blob 00:04:17.192 LINK reconnect 00:04:17.192 LINK arbitration 00:04:17.192 LINK abort 00:04:17.192 LINK dif 00:04:17.192 LINK nvme_manage 00:04:17.192 LINK blobcli 00:04:17.192 LINK accel_perf 00:04:17.451 LINK iscsi_fuzz 00:04:17.710 CC test/bdev/bdevio/bdevio.o 00:04:17.710 CC examples/bdev/hello_world/hello_bdev.o 00:04:17.710 CC examples/bdev/bdevperf/bdevperf.o 00:04:17.969 LINK hello_bdev 00:04:17.969 LINK cuse 00:04:17.969 LINK bdevio 00:04:18.537 LINK bdevperf 00:04:18.796 CC examples/nvmf/nvmf/nvmf.o 00:04:19.364 LINK nvmf 00:04:21.897 LINK esnap 00:04:21.897 00:04:21.897 real 1m7.861s 
00:04:21.897 user 9m5.283s 00:04:21.897 sys 1m57.792s 00:04:21.897 00:09:45 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:21.897 00:09:45 make -- common/autotest_common.sh@10 -- $ set +x 00:04:21.897 ************************************ 00:04:21.897 END TEST make 00:04:21.897 ************************************ 00:04:21.897 00:09:45 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:21.897 00:09:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:21.897 00:09:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:21.897 00:09:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.897 00:09:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:21.897 00:09:45 -- pm/common@44 -- $ pid=6101 00:04:21.897 00:09:45 -- pm/common@50 -- $ kill -TERM 6101 00:04:21.897 00:09:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.897 00:09:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:21.897 00:09:45 -- pm/common@44 -- $ pid=6103 00:04:21.897 00:09:45 -- pm/common@50 -- $ kill -TERM 6103 00:04:21.897 00:09:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.897 00:09:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:21.897 00:09:45 -- pm/common@44 -- $ pid=6105 00:04:21.897 00:09:45 -- pm/common@50 -- $ kill -TERM 6105 00:04:21.897 00:09:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.897 00:09:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:21.897 00:09:45 -- pm/common@44 -- $ pid=6136 00:04:21.897 00:09:45 -- pm/common@50 -- $ sudo -E kill -TERM 6136 00:04:21.897 00:09:45 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || 
SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:21.897 00:09:45 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:21.897 00:09:45 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.897 00:09:45 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.897 00:09:45 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:22.157 00:09:45 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:22.157 00:09:45 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.157 00:09:45 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.157 00:09:45 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.157 00:09:45 -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.157 00:09:45 -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.157 00:09:45 -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.157 00:09:45 -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.157 00:09:45 -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.157 00:09:45 -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.157 00:09:45 -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.157 00:09:45 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.157 00:09:45 -- scripts/common.sh@344 -- # case "$op" in 00:04:22.157 00:09:45 -- scripts/common.sh@345 -- # : 1 00:04:22.157 00:09:45 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.157 00:09:45 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.157 00:09:45 -- scripts/common.sh@365 -- # decimal 1 00:04:22.157 00:09:45 -- scripts/common.sh@353 -- # local d=1 00:04:22.157 00:09:45 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.157 00:09:45 -- scripts/common.sh@355 -- # echo 1 00:04:22.157 00:09:45 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.157 00:09:45 -- scripts/common.sh@366 -- # decimal 2 00:04:22.157 00:09:45 -- scripts/common.sh@353 -- # local d=2 00:04:22.157 00:09:45 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.157 00:09:45 -- scripts/common.sh@355 -- # echo 2 00:04:22.157 00:09:45 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.157 00:09:45 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.157 00:09:45 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.157 00:09:45 -- scripts/common.sh@368 -- # return 0 00:04:22.157 00:09:45 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.157 00:09:45 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:22.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.157 --rc genhtml_branch_coverage=1 00:04:22.157 --rc genhtml_function_coverage=1 00:04:22.157 --rc genhtml_legend=1 00:04:22.157 --rc geninfo_all_blocks=1 00:04:22.157 --rc geninfo_unexecuted_blocks=1 00:04:22.157 00:04:22.157 ' 00:04:22.157 00:09:45 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:22.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.157 --rc genhtml_branch_coverage=1 00:04:22.157 --rc genhtml_function_coverage=1 00:04:22.157 --rc genhtml_legend=1 00:04:22.157 --rc geninfo_all_blocks=1 00:04:22.157 --rc geninfo_unexecuted_blocks=1 00:04:22.157 00:04:22.157 ' 00:04:22.157 00:09:45 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:22.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.157 --rc genhtml_branch_coverage=1 00:04:22.157 --rc 
genhtml_function_coverage=1 00:04:22.157 --rc genhtml_legend=1 00:04:22.157 --rc geninfo_all_blocks=1 00:04:22.157 --rc geninfo_unexecuted_blocks=1 00:04:22.157 00:04:22.157 ' 00:04:22.157 00:09:45 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:22.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.157 --rc genhtml_branch_coverage=1 00:04:22.157 --rc genhtml_function_coverage=1 00:04:22.157 --rc genhtml_legend=1 00:04:22.157 --rc geninfo_all_blocks=1 00:04:22.157 --rc geninfo_unexecuted_blocks=1 00:04:22.157 00:04:22.157 ' 00:04:22.157 00:09:45 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:22.157 00:09:45 -- nvmf/common.sh@7 -- # uname -s 00:04:22.157 00:09:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:22.157 00:09:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:22.157 00:09:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:22.157 00:09:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:22.157 00:09:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:22.157 00:09:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:22.157 00:09:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:22.157 00:09:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:22.157 00:09:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:22.157 00:09:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:22.157 00:09:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:22.157 00:09:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:22.157 00:09:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:22.157 00:09:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:22.157 00:09:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:22.157 00:09:45 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:22.157 00:09:45 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:22.157 00:09:45 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:22.157 00:09:45 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:22.157 00:09:45 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:22.157 00:09:45 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:22.157 00:09:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.157 00:09:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.157 00:09:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.157 00:09:45 -- paths/export.sh@5 -- # export PATH 00:04:22.157 00:09:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.157 00:09:45 -- nvmf/common.sh@51 -- # : 0 00:04:22.157 00:09:45 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:22.157 00:09:45 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:04:22.157 00:09:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:22.157 00:09:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:22.157 00:09:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:22.157 00:09:45 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:22.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:22.157 00:09:45 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:22.157 00:09:45 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:22.157 00:09:45 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:22.157 00:09:45 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:22.157 00:09:45 -- spdk/autotest.sh@32 -- # uname -s 00:04:22.157 00:09:45 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:22.157 00:09:45 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:22.157 00:09:45 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:22.157 00:09:45 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:22.157 00:09:45 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:22.157 00:09:45 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:22.157 00:09:45 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:22.157 00:09:45 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:22.157 00:09:45 -- spdk/autotest.sh@48 -- # udevadm_pid=86884 00:04:22.157 00:09:45 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:22.157 00:09:45 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:22.157 00:09:45 -- pm/common@17 -- # local monitor 00:04:22.157 00:09:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.157 00:09:45 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:22.157 00:09:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.157 00:09:45 -- pm/common@21 -- # date +%s 00:04:22.157 00:09:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.157 00:09:45 -- pm/common@21 -- # date +%s 00:04:22.157 00:09:45 -- pm/common@25 -- # sleep 1 00:04:22.157 00:09:45 -- pm/common@21 -- # date +%s 00:04:22.157 00:09:45 -- pm/common@21 -- # date +%s 00:04:22.158 00:09:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731884985 00:04:22.158 00:09:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731884985 00:04:22.158 00:09:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731884985 00:04:22.158 00:09:45 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731884985 00:04:22.158 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731884985_collect-cpu-load.pm.log 00:04:22.158 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731884985_collect-vmstat.pm.log 00:04:22.158 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731884985_collect-cpu-temp.pm.log 00:04:22.158 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731884985_collect-bmc-pm.bmc.pm.log 00:04:23.099 
00:09:46 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:23.099 00:09:46 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:23.099 00:09:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.099 00:09:46 -- common/autotest_common.sh@10 -- # set +x 00:04:23.099 00:09:46 -- spdk/autotest.sh@59 -- # create_test_list 00:04:23.099 00:09:46 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:23.099 00:09:46 -- common/autotest_common.sh@10 -- # set +x 00:04:23.099 00:09:46 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:23.099 00:09:46 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:23.099 00:09:46 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:23.099 00:09:46 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:23.099 00:09:46 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:23.099 00:09:46 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:23.099 00:09:46 -- common/autotest_common.sh@1457 -- # uname 00:04:23.357 00:09:46 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:23.357 00:09:46 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:23.357 00:09:46 -- common/autotest_common.sh@1477 -- # uname 00:04:23.357 00:09:46 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:23.357 00:09:46 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:23.357 00:09:46 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:23.357 lcov: LCOV version 1.15 00:04:23.357 00:09:47 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:55.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:55.474 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:00.763 00:10:23 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:00.763 00:10:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.763 00:10:23 -- common/autotest_common.sh@10 -- # set +x 00:05:00.763 00:10:23 -- spdk/autotest.sh@78 -- # rm -f 00:05:00.763 00:10:23 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:01.334 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:05:01.334 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:01.334 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:01.334 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:01.334 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:01.334 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:01.334 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:01.334 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:01.334 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:01.334 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:01.334 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:01.334 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:01.334 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:01.334 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:01.334 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:01.334 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:01.334 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:01.594 00:10:25 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:01.594 00:10:25 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:01.594 00:10:25 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:01.594 00:10:25 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:01.594 00:10:25 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:01.594 00:10:25 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:01.594 00:10:25 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:01.594 00:10:25 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:01.594 00:10:25 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:01.594 00:10:25 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:01.594 00:10:25 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.594 00:10:25 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:01.594 00:10:25 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:01.594 00:10:25 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:01.594 00:10:25 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:01.594 No valid GPT data, bailing 00:05:01.594 00:10:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:01.594 00:10:25 -- scripts/common.sh@394 -- # pt= 00:05:01.594 00:10:25 -- scripts/common.sh@395 -- # return 1 00:05:01.594 00:10:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:01.594 1+0 records in 00:05:01.594 1+0 records out 00:05:01.594 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00193555 s, 542 MB/s 00:05:01.594 00:10:25 -- spdk/autotest.sh@105 -- # sync 00:05:01.594 00:10:25 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:01.594 00:10:25 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:01.594 00:10:25 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:04.144 00:10:27 -- spdk/autotest.sh@111 -- # uname -s 00:05:04.144 00:10:27 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:04.144 00:10:27 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:04.144 00:10:27 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:04.714 Hugepages 00:05:04.714 node hugesize free / total 00:05:04.714 node0 1048576kB 0 / 0 00:05:04.714 node0 2048kB 0 / 0 00:05:04.976 node1 1048576kB 0 / 0 00:05:04.976 node1 2048kB 0 / 0 00:05:04.976 00:05:04.976 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:04.976 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:04.976 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:04.976 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:04.976 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:04.976 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:04.976 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:04.976 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:04.976 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:04.976 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:04.976 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:04.976 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:04.976 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:04.976 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:04.976 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:04.976 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:04.976 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:04.976 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:04.976 00:10:28 -- spdk/autotest.sh@117 -- # uname -s 00:05:04.976 00:10:28 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:04.976 00:10:28 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:05:04.976 00:10:28 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:06.367 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:06.367 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:06.367 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:06.367 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:06.367 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:06.367 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:06.367 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:06.367 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:06.367 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:06.367 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:06.367 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:06.367 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:06.367 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:06.367 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:06.367 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:06.367 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:07.305 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:07.305 00:10:31 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:08.706 00:10:32 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:08.706 00:10:32 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:08.706 00:10:32 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:08.706 00:10:32 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:08.706 00:10:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:08.706 00:10:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:08.706 00:10:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:08.706 00:10:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:08.706 00:10:32 -- common/autotest_common.sh@1499 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:08.706 00:10:32 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:08.706 00:10:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:08.706 00:10:32 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:09.647 Waiting for block devices as requested 00:05:09.647 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:09.647 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:09.906 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:09.906 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:09.906 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:09.906 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:10.165 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:10.165 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:10.165 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:10.423 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:10.423 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:10.423 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:10.423 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:10.683 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:10.683 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:10.683 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:10.683 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:10.943 00:10:34 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:10.943 00:10:34 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:10.943 00:10:34 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:10.943 00:10:34 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:05:10.943 00:10:34 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:10.943 00:10:34 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:10.943 00:10:34 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:10.943 00:10:34 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:10.943 00:10:34 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:10.943 00:10:34 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:10.943 00:10:34 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:10.943 00:10:34 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:10.943 00:10:34 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:10.943 00:10:34 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:05:10.943 00:10:34 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:10.943 00:10:34 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:10.943 00:10:34 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:10.943 00:10:34 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:10.943 00:10:34 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:10.943 00:10:34 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:10.943 00:10:34 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:10.943 00:10:34 -- common/autotest_common.sh@1543 -- # continue 00:05:10.943 00:10:34 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:10.943 00:10:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.943 00:10:34 -- common/autotest_common.sh@10 -- # set +x 00:05:10.943 00:10:34 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:10.943 00:10:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.943 00:10:34 -- common/autotest_common.sh@10 -- # set +x 00:05:10.943 00:10:34 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:12.327 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:12.327 0000:00:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:05:12.327 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:12.327 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:12.327 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:12.327 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:12.327 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:12.327 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:12.327 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:12.327 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:12.327 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:12.327 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:12.327 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:12.327 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:12.327 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:12.327 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:13.269 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:13.528 00:10:37 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:13.528 00:10:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.528 00:10:37 -- common/autotest_common.sh@10 -- # set +x 00:05:13.528 00:10:37 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:13.528 00:10:37 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:13.528 00:10:37 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:13.528 00:10:37 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:13.528 00:10:37 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:13.528 00:10:37 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:13.529 00:10:37 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:13.529 00:10:37 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:13.529 00:10:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:13.529 00:10:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:13.529 00:10:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:05:13.529 00:10:37 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:13.529 00:10:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:13.529 00:10:37 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:13.529 00:10:37 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:13.529 00:10:37 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:13.529 00:10:37 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:13.529 00:10:37 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:13.529 00:10:37 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:13.529 00:10:37 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:13.529 00:10:37 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:13.529 00:10:37 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:05:13.529 00:10:37 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:05:13.529 00:10:37 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=97482 00:05:13.529 00:10:37 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.529 00:10:37 -- common/autotest_common.sh@1585 -- # waitforlisten 97482 00:05:13.529 00:10:37 -- common/autotest_common.sh@835 -- # '[' -z 97482 ']' 00:05:13.529 00:10:37 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.529 00:10:37 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.529 00:10:37 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:13.529 00:10:37 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.529 00:10:37 -- common/autotest_common.sh@10 -- # set +x 00:05:13.529 [2024-11-18 00:10:37.278090] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:13.529 [2024-11-18 00:10:37.278184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97482 ] 00:05:13.529 [2024-11-18 00:10:37.345571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.788 [2024-11-18 00:10:37.394175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.047 00:10:37 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.047 00:10:37 -- common/autotest_common.sh@868 -- # return 0 00:05:14.047 00:10:37 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:14.047 00:10:37 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:14.047 00:10:37 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:17.338 nvme0n1 00:05:17.338 00:10:40 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:17.338 [2024-11-18 00:10:40.996546] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:17.338 [2024-11-18 00:10:40.996587] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:17.338 request: 00:05:17.338 { 00:05:17.338 "nvme_ctrlr_name": "nvme0", 00:05:17.338 "password": "test", 00:05:17.338 "method": "bdev_nvme_opal_revert", 00:05:17.338 "req_id": 1 00:05:17.338 } 00:05:17.338 Got JSON-RPC error response 00:05:17.338 response: 00:05:17.338 { 00:05:17.338 
"code": -32603, 00:05:17.338 "message": "Internal error" } 00:05:17.338 00:10:41 -- common/autotest_common.sh@1591 -- # true 00:05:17.338 00:10:41 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:17.338 00:10:41 -- common/autotest_common.sh@1595 -- # killprocess 97482 00:05:17.338 00:10:41 -- common/autotest_common.sh@954 -- # '[' -z 97482 ']' 00:05:17.338 00:10:41 -- common/autotest_common.sh@958 -- # kill -0 97482 00:05:17.338 00:10:41 -- common/autotest_common.sh@959 -- # uname 00:05:17.338 00:10:41 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.338 00:10:41 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97482 00:05:17.338 00:10:41 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.338 00:10:41 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.338 00:10:41 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97482' 00:05:17.338 killing process with pid 97482 00:05:17.338 00:10:41 -- common/autotest_common.sh@973 -- # kill 97482 00:05:17.338 00:10:41 -- common/autotest_common.sh@978 -- # wait 97482 00:05:17.338 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:18.977 00:10:42 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:18.977 00:10:42 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:18.977 00:10:42 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:18.977 00:10:42 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:18.977 00:10:42 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:18.977 00:10:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:18.977 00:10:42 -- common/autotest_common.sh@10 -- # set +x 00:05:18.977 00:10:42 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:18.977 00:10:42 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:18.977 00:10:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.977 00:10:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.977 00:10:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.237 ************************************ 00:05:19.237 START TEST env 00:05:19.237 ************************************ 00:05:19.237 00:10:42 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:19.237 * Looking for test
storage... 00:05:19.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:19.237 00:10:42 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.237 00:10:42 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.237 00:10:42 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.237 00:10:42 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.237 00:10:42 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.237 00:10:42 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.237 00:10:42 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.237 00:10:42 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.237 00:10:42 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.237 00:10:42 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.237 00:10:42 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.237 00:10:42 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.237 00:10:42 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.237 00:10:42 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.237 00:10:42 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.237 00:10:42 env -- scripts/common.sh@344 -- # case "$op" in 00:05:19.237 00:10:42 env -- scripts/common.sh@345 -- # : 1 00:05:19.237 00:10:42 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.237 00:10:42 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.237 00:10:42 env -- scripts/common.sh@365 -- # decimal 1 00:05:19.237 00:10:42 env -- scripts/common.sh@353 -- # local d=1 00:05:19.237 00:10:42 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.237 00:10:42 env -- scripts/common.sh@355 -- # echo 1 00:05:19.237 00:10:42 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.237 00:10:42 env -- scripts/common.sh@366 -- # decimal 2 00:05:19.237 00:10:42 env -- scripts/common.sh@353 -- # local d=2 00:05:19.237 00:10:42 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.237 00:10:42 env -- scripts/common.sh@355 -- # echo 2 00:05:19.237 00:10:42 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.237 00:10:42 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.237 00:10:42 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.237 00:10:42 env -- scripts/common.sh@368 -- # return 0 00:05:19.237 00:10:42 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.237 00:10:42 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.237 --rc genhtml_branch_coverage=1 00:05:19.237 --rc genhtml_function_coverage=1 00:05:19.237 --rc genhtml_legend=1 00:05:19.237 --rc geninfo_all_blocks=1 00:05:19.237 --rc geninfo_unexecuted_blocks=1 00:05:19.237 00:05:19.237 ' 00:05:19.237 00:10:42 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.237 --rc genhtml_branch_coverage=1 00:05:19.237 --rc genhtml_function_coverage=1 00:05:19.237 --rc genhtml_legend=1 00:05:19.237 --rc geninfo_all_blocks=1 00:05:19.237 --rc geninfo_unexecuted_blocks=1 00:05:19.237 00:05:19.237 ' 00:05:19.237 00:10:42 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:19.237 --rc genhtml_branch_coverage=1 00:05:19.237 --rc genhtml_function_coverage=1 00:05:19.237 --rc genhtml_legend=1 00:05:19.237 --rc geninfo_all_blocks=1 00:05:19.238 --rc geninfo_unexecuted_blocks=1 00:05:19.238 00:05:19.238 ' 00:05:19.238 00:10:42 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.238 --rc genhtml_branch_coverage=1 00:05:19.238 --rc genhtml_function_coverage=1 00:05:19.238 --rc genhtml_legend=1 00:05:19.238 --rc geninfo_all_blocks=1 00:05:19.238 --rc geninfo_unexecuted_blocks=1 00:05:19.238 00:05:19.238 ' 00:05:19.238 00:10:42 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:19.238 00:10:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.238 00:10:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.238 00:10:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.238 ************************************ 00:05:19.238 START TEST env_memory 00:05:19.238 ************************************ 00:05:19.238 00:10:42 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:19.238 00:05:19.238 00:05:19.238 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.238 http://cunit.sourceforge.net/ 00:05:19.238 00:05:19.238 00:05:19.238 Suite: memory 00:05:19.238 Test: alloc and free memory map ...[2024-11-18 00:10:43.012177] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:19.238 passed 00:05:19.238 Test: mem map translation ...[2024-11-18 00:10:43.032206] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:19.238 [2024-11-18 
00:10:43.032227] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:19.238 [2024-11-18 00:10:43.032282] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:19.238 [2024-11-18 00:10:43.032294] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:19.505 passed 00:05:19.505 Test: mem map registration ...[2024-11-18 00:10:43.073212] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:19.505 [2024-11-18 00:10:43.073230] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:19.506 passed 00:05:19.506 Test: mem map adjacent registrations ...passed 00:05:19.506 00:05:19.506 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.506 suites 1 1 n/a 0 0 00:05:19.506 tests 4 4 4 0 0 00:05:19.506 asserts 152 152 152 0 n/a 00:05:19.506 00:05:19.506 Elapsed time = 0.138 seconds 00:05:19.506 00:05:19.506 real 0m0.147s 00:05:19.506 user 0m0.135s 00:05:19.506 sys 0m0.012s 00:05:19.506 00:10:43 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.506 00:10:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:19.506 ************************************ 00:05:19.506 END TEST env_memory 00:05:19.506 ************************************ 00:05:19.506 00:10:43 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:19.506 00:10:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:05:19.506 00:10:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.506 00:10:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.506 ************************************ 00:05:19.506 START TEST env_vtophys 00:05:19.506 ************************************ 00:05:19.506 00:10:43 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:19.506 EAL: lib.eal log level changed from notice to debug 00:05:19.506 EAL: Detected lcore 0 as core 0 on socket 0 00:05:19.506 EAL: Detected lcore 1 as core 1 on socket 0 00:05:19.506 EAL: Detected lcore 2 as core 2 on socket 0 00:05:19.506 EAL: Detected lcore 3 as core 3 on socket 0 00:05:19.506 EAL: Detected lcore 4 as core 4 on socket 0 00:05:19.506 EAL: Detected lcore 5 as core 5 on socket 0 00:05:19.506 EAL: Detected lcore 6 as core 8 on socket 0 00:05:19.506 EAL: Detected lcore 7 as core 9 on socket 0 00:05:19.506 EAL: Detected lcore 8 as core 10 on socket 0 00:05:19.506 EAL: Detected lcore 9 as core 11 on socket 0 00:05:19.506 EAL: Detected lcore 10 as core 12 on socket 0 00:05:19.506 EAL: Detected lcore 11 as core 13 on socket 0 00:05:19.506 EAL: Detected lcore 12 as core 0 on socket 1 00:05:19.506 EAL: Detected lcore 13 as core 1 on socket 1 00:05:19.506 EAL: Detected lcore 14 as core 2 on socket 1 00:05:19.506 EAL: Detected lcore 15 as core 3 on socket 1 00:05:19.506 EAL: Detected lcore 16 as core 4 on socket 1 00:05:19.506 EAL: Detected lcore 17 as core 5 on socket 1 00:05:19.506 EAL: Detected lcore 18 as core 8 on socket 1 00:05:19.506 EAL: Detected lcore 19 as core 9 on socket 1 00:05:19.506 EAL: Detected lcore 20 as core 10 on socket 1 00:05:19.506 EAL: Detected lcore 21 as core 11 on socket 1 00:05:19.506 EAL: Detected lcore 22 as core 12 on socket 1 00:05:19.506 EAL: Detected lcore 23 as core 13 on socket 1 00:05:19.506 EAL: Detected lcore 24 as core 0 on socket 0 00:05:19.506 EAL: Detected lcore 25 as core 
1 on socket 0 00:05:19.506 EAL: Detected lcore 26 as core 2 on socket 0 00:05:19.506 EAL: Detected lcore 27 as core 3 on socket 0 00:05:19.506 EAL: Detected lcore 28 as core 4 on socket 0 00:05:19.506 EAL: Detected lcore 29 as core 5 on socket 0 00:05:19.506 EAL: Detected lcore 30 as core 8 on socket 0 00:05:19.506 EAL: Detected lcore 31 as core 9 on socket 0 00:05:19.506 EAL: Detected lcore 32 as core 10 on socket 0 00:05:19.506 EAL: Detected lcore 33 as core 11 on socket 0 00:05:19.506 EAL: Detected lcore 34 as core 12 on socket 0 00:05:19.506 EAL: Detected lcore 35 as core 13 on socket 0 00:05:19.506 EAL: Detected lcore 36 as core 0 on socket 1 00:05:19.506 EAL: Detected lcore 37 as core 1 on socket 1 00:05:19.506 EAL: Detected lcore 38 as core 2 on socket 1 00:05:19.506 EAL: Detected lcore 39 as core 3 on socket 1 00:05:19.506 EAL: Detected lcore 40 as core 4 on socket 1 00:05:19.506 EAL: Detected lcore 41 as core 5 on socket 1 00:05:19.506 EAL: Detected lcore 42 as core 8 on socket 1 00:05:19.506 EAL: Detected lcore 43 as core 9 on socket 1 00:05:19.506 EAL: Detected lcore 44 as core 10 on socket 1 00:05:19.506 EAL: Detected lcore 45 as core 11 on socket 1 00:05:19.506 EAL: Detected lcore 46 as core 12 on socket 1 00:05:19.506 EAL: Detected lcore 47 as core 13 on socket 1 00:05:19.506 EAL: Maximum logical cores by configuration: 128 00:05:19.506 EAL: Detected CPU lcores: 48 00:05:19.506 EAL: Detected NUMA nodes: 2 00:05:19.506 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:19.506 EAL: Detected shared linkage of DPDK 00:05:19.506 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:19.506 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:19.506 EAL: Registered [vdev] bus. 
00:05:19.506 EAL: bus.vdev log level changed from disabled to notice 00:05:19.506 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:19.506 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:19.506 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:19.506 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:19.506 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:19.506 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:19.506 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:19.506 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:19.506 EAL: No shared files mode enabled, IPC will be disabled 00:05:19.506 EAL: No shared files mode enabled, IPC is disabled 00:05:19.506 EAL: Bus pci wants IOVA as 'DC' 00:05:19.506 EAL: Bus vdev wants IOVA as 'DC' 00:05:19.506 EAL: Buses did not request a specific IOVA mode. 00:05:19.506 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:19.506 EAL: Selected IOVA mode 'VA' 00:05:19.506 EAL: Probing VFIO support... 00:05:19.506 EAL: IOMMU type 1 (Type 1) is supported 00:05:19.506 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:19.506 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:19.506 EAL: VFIO support initialized 00:05:19.506 EAL: Ask a virtual area of 0x2e000 bytes 00:05:19.506 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:19.506 EAL: Setting up physically contiguous memory... 
00:05:19.506 EAL: Setting maximum number of open files to 524288 00:05:19.506 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:19.506 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:19.506 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:19.506 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.506 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:19.506 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.506 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.506 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:19.506 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:19.506 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.506 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:19.506 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.506 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.506 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:19.506 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:19.506 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.506 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:19.506 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.506 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.506 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:19.506 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:19.506 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.506 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:19.506 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.506 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.506 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:19.506 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:19.506 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:19.506 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.506 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:19.506 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.506 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.506 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:19.506 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:19.506 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.506 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:19.506 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.506 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.506 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:19.506 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:19.506 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.506 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:19.506 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.506 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.506 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:19.506 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:19.506 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.506 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:19.506 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.506 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.506 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:19.506 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:19.506 EAL: Hugepages will be freed exactly as allocated. 
00:05:19.506 EAL: No shared files mode enabled, IPC is disabled 00:05:19.506 EAL: No shared files mode enabled, IPC is disabled 00:05:19.506 EAL: TSC frequency is ~2700000 KHz 00:05:19.506 EAL: Main lcore 0 is ready (tid=7f1ae6144a00;cpuset=[0]) 00:05:19.506 EAL: Trying to obtain current memory policy. 00:05:19.506 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.506 EAL: Restoring previous memory policy: 0 00:05:19.506 EAL: request: mp_malloc_sync 00:05:19.506 EAL: No shared files mode enabled, IPC is disabled 00:05:19.506 EAL: Heap on socket 0 was expanded by 2MB 00:05:19.506 EAL: No shared files mode enabled, IPC is disabled 00:05:19.506 EAL: No shared files mode enabled, IPC is disabled 00:05:19.506 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:19.506 EAL: Mem event callback 'spdk:(nil)' registered 00:05:19.506 00:05:19.506 00:05:19.506 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.506 http://cunit.sourceforge.net/ 00:05:19.506 00:05:19.506 00:05:19.506 Suite: components_suite 00:05:19.506 Test: vtophys_malloc_test ...passed 00:05:19.507 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:19.507 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.507 EAL: Restoring previous memory policy: 4 00:05:19.507 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.507 EAL: request: mp_malloc_sync 00:05:19.507 EAL: No shared files mode enabled, IPC is disabled 00:05:19.507 EAL: Heap on socket 0 was expanded by 4MB 00:05:19.507 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.507 EAL: request: mp_malloc_sync 00:05:19.507 EAL: No shared files mode enabled, IPC is disabled 00:05:19.507 EAL: Heap on socket 0 was shrunk by 4MB 00:05:19.507 EAL: Trying to obtain current memory policy. 
00:05:19.507 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.507 EAL: Restoring previous memory policy: 4 00:05:19.507 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.507 EAL: request: mp_malloc_sync 00:05:19.507 EAL: No shared files mode enabled, IPC is disabled 00:05:19.507 EAL: Heap on socket 0 was expanded by 6MB 00:05:19.507 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.507 EAL: request: mp_malloc_sync 00:05:19.507 EAL: No shared files mode enabled, IPC is disabled 00:05:19.507 EAL: Heap on socket 0 was shrunk by 6MB 00:05:19.507 EAL: Trying to obtain current memory policy. 00:05:19.507 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.507 EAL: Restoring previous memory policy: 4 00:05:19.507 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.507 EAL: request: mp_malloc_sync 00:05:19.507 EAL: No shared files mode enabled, IPC is disabled 00:05:19.507 EAL: Heap on socket 0 was expanded by 10MB 00:05:19.507 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.507 EAL: request: mp_malloc_sync 00:05:19.507 EAL: No shared files mode enabled, IPC is disabled 00:05:19.507 EAL: Heap on socket 0 was shrunk by 10MB 00:05:19.507 EAL: Trying to obtain current memory policy. 00:05:19.507 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.507 EAL: Restoring previous memory policy: 4 00:05:19.507 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.507 EAL: request: mp_malloc_sync 00:05:19.507 EAL: No shared files mode enabled, IPC is disabled 00:05:19.507 EAL: Heap on socket 0 was expanded by 18MB 00:05:19.507 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.507 EAL: request: mp_malloc_sync 00:05:19.507 EAL: No shared files mode enabled, IPC is disabled 00:05:19.507 EAL: Heap on socket 0 was shrunk by 18MB 00:05:19.507 EAL: Trying to obtain current memory policy. 
00:05:19.507 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.507 EAL: Restoring previous memory policy: 4 00:05:19.507 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.507 EAL: request: mp_malloc_sync 00:05:19.507 EAL: No shared files mode enabled, IPC is disabled 00:05:19.507 EAL: Heap on socket 0 was expanded by 34MB 00:05:19.507 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.507 EAL: request: mp_malloc_sync 00:05:19.507 EAL: No shared files mode enabled, IPC is disabled 00:05:19.507 EAL: Heap on socket 0 was shrunk by 34MB 00:05:19.507 EAL: Trying to obtain current memory policy. 00:05:19.507 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.507 EAL: Restoring previous memory policy: 4 00:05:19.507 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.507 EAL: request: mp_malloc_sync 00:05:19.507 EAL: No shared files mode enabled, IPC is disabled 00:05:19.507 EAL: Heap on socket 0 was expanded by 66MB 00:05:19.507 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.507 EAL: request: mp_malloc_sync 00:05:19.507 EAL: No shared files mode enabled, IPC is disabled 00:05:19.507 EAL: Heap on socket 0 was shrunk by 66MB 00:05:19.507 EAL: Trying to obtain current memory policy. 00:05:19.507 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.766 EAL: Restoring previous memory policy: 4 00:05:19.766 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.766 EAL: request: mp_malloc_sync 00:05:19.766 EAL: No shared files mode enabled, IPC is disabled 00:05:19.766 EAL: Heap on socket 0 was expanded by 130MB 00:05:19.766 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.766 EAL: request: mp_malloc_sync 00:05:19.766 EAL: No shared files mode enabled, IPC is disabled 00:05:19.766 EAL: Heap on socket 0 was shrunk by 130MB 00:05:19.766 EAL: Trying to obtain current memory policy. 
00:05:19.766 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.766 EAL: Restoring previous memory policy: 4 00:05:19.766 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.766 EAL: request: mp_malloc_sync 00:05:19.766 EAL: No shared files mode enabled, IPC is disabled 00:05:19.766 EAL: Heap on socket 0 was expanded by 258MB 00:05:19.766 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.766 EAL: request: mp_malloc_sync 00:05:19.766 EAL: No shared files mode enabled, IPC is disabled 00:05:19.766 EAL: Heap on socket 0 was shrunk by 258MB 00:05:19.766 EAL: Trying to obtain current memory policy. 00:05:19.766 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.024 EAL: Restoring previous memory policy: 4 00:05:20.024 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.024 EAL: request: mp_malloc_sync 00:05:20.024 EAL: No shared files mode enabled, IPC is disabled 00:05:20.024 EAL: Heap on socket 0 was expanded by 514MB 00:05:20.024 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.283 EAL: request: mp_malloc_sync 00:05:20.283 EAL: No shared files mode enabled, IPC is disabled 00:05:20.283 EAL: Heap on socket 0 was shrunk by 514MB 00:05:20.283 EAL: Trying to obtain current memory policy. 
00:05:20.283 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.542 EAL: Restoring previous memory policy: 4 00:05:20.542 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.542 EAL: request: mp_malloc_sync 00:05:20.542 EAL: No shared files mode enabled, IPC is disabled 00:05:20.542 EAL: Heap on socket 0 was expanded by 1026MB 00:05:20.801 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.801 EAL: request: mp_malloc_sync 00:05:20.801 EAL: No shared files mode enabled, IPC is disabled 00:05:20.801 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:20.801 passed 00:05:20.801 00:05:20.801 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.801 suites 1 1 n/a 0 0 00:05:20.801 tests 2 2 2 0 0 00:05:20.801 asserts 497 497 497 0 n/a 00:05:20.801 00:05:20.801 Elapsed time = 1.328 seconds 00:05:20.801 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.801 EAL: request: mp_malloc_sync 00:05:20.801 EAL: No shared files mode enabled, IPC is disabled 00:05:20.801 EAL: Heap on socket 0 was shrunk by 2MB 00:05:20.801 EAL: No shared files mode enabled, IPC is disabled 00:05:20.801 EAL: No shared files mode enabled, IPC is disabled 00:05:20.801 EAL: No shared files mode enabled, IPC is disabled 00:05:21.062 00:05:21.062 real 0m1.450s 00:05:21.062 user 0m0.854s 00:05:21.062 sys 0m0.559s 00:05:21.062 00:10:44 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.062 00:10:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:21.062 ************************************ 00:05:21.062 END TEST env_vtophys 00:05:21.062 ************************************ 00:05:21.062 00:10:44 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:21.062 00:10:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.062 00:10:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.062 00:10:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.062 
************************************ 00:05:21.062 START TEST env_pci 00:05:21.062 ************************************ 00:05:21.062 00:10:44 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:21.062 00:05:21.062 00:05:21.062 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.062 http://cunit.sourceforge.net/ 00:05:21.062 00:05:21.062 00:05:21.062 Suite: pci 00:05:21.062 Test: pci_hook ...[2024-11-18 00:10:44.687481] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 98373 has claimed it 00:05:21.062 EAL: Cannot find device (10000:00:01.0) 00:05:21.062 EAL: Failed to attach device on primary process 00:05:21.062 passed 00:05:21.062 00:05:21.062 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.062 suites 1 1 n/a 0 0 00:05:21.062 tests 1 1 1 0 0 00:05:21.062 asserts 25 25 25 0 n/a 00:05:21.062 00:05:21.062 Elapsed time = 0.022 seconds 00:05:21.062 00:05:21.062 real 0m0.035s 00:05:21.062 user 0m0.009s 00:05:21.062 sys 0m0.026s 00:05:21.062 00:10:44 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.062 00:10:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:21.062 ************************************ 00:05:21.062 END TEST env_pci 00:05:21.062 ************************************ 00:05:21.062 00:10:44 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:21.062 00:10:44 env -- env/env.sh@15 -- # uname 00:05:21.062 00:10:44 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:21.062 00:10:44 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:21.062 00:10:44 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:21.062 00:10:44 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:21.062 00:10:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.062 00:10:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.062 ************************************ 00:05:21.062 START TEST env_dpdk_post_init 00:05:21.062 ************************************ 00:05:21.062 00:10:44 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:21.062 EAL: Detected CPU lcores: 48 00:05:21.062 EAL: Detected NUMA nodes: 2 00:05:21.062 EAL: Detected shared linkage of DPDK 00:05:21.062 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:21.062 EAL: Selected IOVA mode 'VA' 00:05:21.062 EAL: VFIO support initialized 00:05:21.062 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:21.062 EAL: Using IOMMU type 1 (Type 1) 00:05:21.062 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:21.323 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:21.323 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:21.323 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:21.323 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:21.323 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:21.323 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:21.323 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:21.323 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:21.323 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:21.323 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:21.323 EAL: Probe PCI driver: 
spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:21.323 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:21.323 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:21.323 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:21.323 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:22.270 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:25.554 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:25.554 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:25.554 Starting DPDK initialization... 00:05:25.554 Starting SPDK post initialization... 00:05:25.554 SPDK NVMe probe 00:05:25.554 Attaching to 0000:88:00.0 00:05:25.554 Attached to 0000:88:00.0 00:05:25.554 Cleaning up... 00:05:25.554 00:05:25.554 real 0m4.382s 00:05:25.554 user 0m3.259s 00:05:25.554 sys 0m0.185s 00:05:25.554 00:10:49 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.554 00:10:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:25.554 ************************************ 00:05:25.554 END TEST env_dpdk_post_init 00:05:25.554 ************************************ 00:05:25.554 00:10:49 env -- env/env.sh@26 -- # uname 00:05:25.554 00:10:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:25.554 00:10:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:25.554 00:10:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.554 00:10:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.554 00:10:49 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.554 ************************************ 00:05:25.554 START TEST env_mem_callbacks 00:05:25.554 ************************************ 00:05:25.554 00:10:49 
env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:25.554 EAL: Detected CPU lcores: 48 00:05:25.554 EAL: Detected NUMA nodes: 2 00:05:25.554 EAL: Detected shared linkage of DPDK 00:05:25.554 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:25.554 EAL: Selected IOVA mode 'VA' 00:05:25.554 EAL: VFIO support initialized 00:05:25.554 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:25.554 00:05:25.554 00:05:25.554 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.554 http://cunit.sourceforge.net/ 00:05:25.554 00:05:25.554 00:05:25.554 Suite: memory 00:05:25.554 Test: test ... 00:05:25.554 register 0x200000200000 2097152 00:05:25.554 malloc 3145728 00:05:25.554 register 0x200000400000 4194304 00:05:25.554 buf 0x200000500000 len 3145728 PASSED 00:05:25.554 malloc 64 00:05:25.554 buf 0x2000004fff40 len 64 PASSED 00:05:25.554 malloc 4194304 00:05:25.554 register 0x200000800000 6291456 00:05:25.554 buf 0x200000a00000 len 4194304 PASSED 00:05:25.554 free 0x200000500000 3145728 00:05:25.554 free 0x2000004fff40 64 00:05:25.554 unregister 0x200000400000 4194304 PASSED 00:05:25.554 free 0x200000a00000 4194304 00:05:25.554 unregister 0x200000800000 6291456 PASSED 00:05:25.554 malloc 8388608 00:05:25.554 register 0x200000400000 10485760 00:05:25.554 buf 0x200000600000 len 8388608 PASSED 00:05:25.554 free 0x200000600000 8388608 00:05:25.554 unregister 0x200000400000 10485760 PASSED 00:05:25.554 passed 00:05:25.554 00:05:25.554 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.554 suites 1 1 n/a 0 0 00:05:25.554 tests 1 1 1 0 0 00:05:25.554 asserts 15 15 15 0 n/a 00:05:25.554 00:05:25.554 Elapsed time = 0.005 seconds 00:05:25.554 00:05:25.554 real 0m0.045s 00:05:25.554 user 0m0.013s 00:05:25.554 sys 0m0.032s 00:05:25.554 00:10:49 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.554 00:10:49 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:25.554 ************************************ 00:05:25.554 END TEST env_mem_callbacks 00:05:25.554 ************************************ 00:05:25.554 00:05:25.554 real 0m6.447s 00:05:25.554 user 0m4.450s 00:05:25.554 sys 0m1.041s 00:05:25.554 00:10:49 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.554 00:10:49 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.554 ************************************ 00:05:25.554 END TEST env 00:05:25.554 ************************************ 00:05:25.554 00:10:49 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:25.554 00:10:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.554 00:10:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.554 00:10:49 -- common/autotest_common.sh@10 -- # set +x 00:05:25.554 ************************************ 00:05:25.554 START TEST rpc 00:05:25.554 ************************************ 00:05:25.554 00:10:49 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:25.554 * Looking for test storage... 
00:05:25.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:25.554 00:10:49 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:25.554 00:10:49 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:25.554 00:10:49 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:25.812 00:10:49 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:25.812 00:10:49 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.812 00:10:49 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.812 00:10:49 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.812 00:10:49 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.813 00:10:49 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.813 00:10:49 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.813 00:10:49 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.813 00:10:49 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.813 00:10:49 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.813 00:10:49 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.813 00:10:49 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.813 00:10:49 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:25.813 00:10:49 rpc -- scripts/common.sh@345 -- # : 1 00:05:25.813 00:10:49 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.813 00:10:49 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.813 00:10:49 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:25.813 00:10:49 rpc -- scripts/common.sh@353 -- # local d=1 00:05:25.813 00:10:49 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.813 00:10:49 rpc -- scripts/common.sh@355 -- # echo 1 00:05:25.813 00:10:49 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.813 00:10:49 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:25.813 00:10:49 rpc -- scripts/common.sh@353 -- # local d=2 00:05:25.813 00:10:49 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.813 00:10:49 rpc -- scripts/common.sh@355 -- # echo 2 00:05:25.813 00:10:49 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.813 00:10:49 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.813 00:10:49 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.813 00:10:49 rpc -- scripts/common.sh@368 -- # return 0 00:05:25.813 00:10:49 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.813 00:10:49 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:25.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.813 --rc genhtml_branch_coverage=1 00:05:25.813 --rc genhtml_function_coverage=1 00:05:25.813 --rc genhtml_legend=1 00:05:25.813 --rc geninfo_all_blocks=1 00:05:25.813 --rc geninfo_unexecuted_blocks=1 00:05:25.813 00:05:25.813 ' 00:05:25.813 00:10:49 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:25.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.813 --rc genhtml_branch_coverage=1 00:05:25.813 --rc genhtml_function_coverage=1 00:05:25.813 --rc genhtml_legend=1 00:05:25.813 --rc geninfo_all_blocks=1 00:05:25.813 --rc geninfo_unexecuted_blocks=1 00:05:25.813 00:05:25.813 ' 00:05:25.813 00:10:49 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:25.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:25.813 --rc genhtml_branch_coverage=1 00:05:25.813 --rc genhtml_function_coverage=1 00:05:25.813 --rc genhtml_legend=1 00:05:25.813 --rc geninfo_all_blocks=1 00:05:25.813 --rc geninfo_unexecuted_blocks=1 00:05:25.813 00:05:25.813 ' 00:05:25.813 00:10:49 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:25.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.813 --rc genhtml_branch_coverage=1 00:05:25.813 --rc genhtml_function_coverage=1 00:05:25.813 --rc genhtml_legend=1 00:05:25.813 --rc geninfo_all_blocks=1 00:05:25.813 --rc geninfo_unexecuted_blocks=1 00:05:25.813 00:05:25.813 ' 00:05:25.813 00:10:49 rpc -- rpc/rpc.sh@65 -- # spdk_pid=99160 00:05:25.813 00:10:49 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:25.813 00:10:49 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.813 00:10:49 rpc -- rpc/rpc.sh@67 -- # waitforlisten 99160 00:05:25.813 00:10:49 rpc -- common/autotest_common.sh@835 -- # '[' -z 99160 ']' 00:05:25.813 00:10:49 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.813 00:10:49 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.813 00:10:49 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.813 00:10:49 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.813 00:10:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.813 [2024-11-18 00:10:49.521447] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:05:25.813 [2024-11-18 00:10:49.521532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99160 ] 00:05:25.813 [2024-11-18 00:10:49.589568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.071 [2024-11-18 00:10:49.636290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:26.071 [2024-11-18 00:10:49.636363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 99160' to capture a snapshot of events at runtime. 00:05:26.071 [2024-11-18 00:10:49.636378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:26.071 [2024-11-18 00:10:49.636389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:26.071 [2024-11-18 00:10:49.636399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid99160 for offline analysis/debug. 
00:05:26.071 [2024-11-18 00:10:49.637043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.071 00:10:49 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.071 00:10:49 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:26.071 00:10:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:26.071 00:10:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:26.071 00:10:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:26.071 00:10:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:26.071 00:10:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.071 00:10:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.071 00:10:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.331 ************************************ 00:05:26.331 START TEST rpc_integrity 00:05:26.331 ************************************ 00:05:26.331 00:10:49 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:26.331 00:10:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.331 00:10:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.331 00:10:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.331 00:10:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.331 00:10:49 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.331 00:10:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.331 00:10:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.331 00:10:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.331 00:10:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.331 00:10:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.331 00:10:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.331 00:10:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:26.331 00:10:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.331 00:10:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.331 00:10:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.331 00:10:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.331 00:10:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.331 { 00:05:26.331 "name": "Malloc0", 00:05:26.331 "aliases": [ 00:05:26.331 "663be8a3-c00e-489e-ae18-8d6af42e518f" 00:05:26.331 ], 00:05:26.331 "product_name": "Malloc disk", 00:05:26.331 "block_size": 512, 00:05:26.331 "num_blocks": 16384, 00:05:26.331 "uuid": "663be8a3-c00e-489e-ae18-8d6af42e518f", 00:05:26.331 "assigned_rate_limits": { 00:05:26.331 "rw_ios_per_sec": 0, 00:05:26.331 "rw_mbytes_per_sec": 0, 00:05:26.331 "r_mbytes_per_sec": 0, 00:05:26.331 "w_mbytes_per_sec": 0 00:05:26.331 }, 00:05:26.331 "claimed": false, 00:05:26.331 "zoned": false, 00:05:26.331 "supported_io_types": { 00:05:26.331 "read": true, 00:05:26.331 "write": true, 00:05:26.331 "unmap": true, 00:05:26.331 "flush": true, 00:05:26.331 "reset": true, 00:05:26.331 "nvme_admin": false, 00:05:26.331 "nvme_io": false, 00:05:26.331 "nvme_io_md": false, 00:05:26.331 "write_zeroes": true, 00:05:26.331 "zcopy": true, 00:05:26.331 "get_zone_info": false, 00:05:26.331 
"zone_management": false, 00:05:26.331 "zone_append": false, 00:05:26.331 "compare": false, 00:05:26.331 "compare_and_write": false, 00:05:26.331 "abort": true, 00:05:26.331 "seek_hole": false, 00:05:26.331 "seek_data": false, 00:05:26.331 "copy": true, 00:05:26.331 "nvme_iov_md": false 00:05:26.331 }, 00:05:26.331 "memory_domains": [ 00:05:26.331 { 00:05:26.331 "dma_device_id": "system", 00:05:26.331 "dma_device_type": 1 00:05:26.331 }, 00:05:26.331 { 00:05:26.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.331 "dma_device_type": 2 00:05:26.331 } 00:05:26.331 ], 00:05:26.331 "driver_specific": {} 00:05:26.331 } 00:05:26.331 ]' 00:05:26.331 00:10:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:26.331 00:10:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.331 00:10:50 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:26.331 00:10:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.331 00:10:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.331 [2024-11-18 00:10:50.027021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:26.331 [2024-11-18 00:10:50.027079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.331 [2024-11-18 00:10:50.027104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd948d0 00:05:26.331 [2024-11-18 00:10:50.027118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.331 [2024-11-18 00:10:50.028727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.331 [2024-11-18 00:10:50.028751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.331 Passthru0 00:05:26.331 00:10:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.331 00:10:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:26.331 00:10:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.331 00:10:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.331 00:10:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.331 00:10:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.331 { 00:05:26.331 "name": "Malloc0", 00:05:26.331 "aliases": [ 00:05:26.331 "663be8a3-c00e-489e-ae18-8d6af42e518f" 00:05:26.331 ], 00:05:26.331 "product_name": "Malloc disk", 00:05:26.331 "block_size": 512, 00:05:26.331 "num_blocks": 16384, 00:05:26.331 "uuid": "663be8a3-c00e-489e-ae18-8d6af42e518f", 00:05:26.331 "assigned_rate_limits": { 00:05:26.331 "rw_ios_per_sec": 0, 00:05:26.331 "rw_mbytes_per_sec": 0, 00:05:26.331 "r_mbytes_per_sec": 0, 00:05:26.331 "w_mbytes_per_sec": 0 00:05:26.331 }, 00:05:26.331 "claimed": true, 00:05:26.331 "claim_type": "exclusive_write", 00:05:26.331 "zoned": false, 00:05:26.331 "supported_io_types": { 00:05:26.331 "read": true, 00:05:26.331 "write": true, 00:05:26.331 "unmap": true, 00:05:26.331 "flush": true, 00:05:26.331 "reset": true, 00:05:26.331 "nvme_admin": false, 00:05:26.331 "nvme_io": false, 00:05:26.331 "nvme_io_md": false, 00:05:26.331 "write_zeroes": true, 00:05:26.331 "zcopy": true, 00:05:26.331 "get_zone_info": false, 00:05:26.331 "zone_management": false, 00:05:26.331 "zone_append": false, 00:05:26.331 "compare": false, 00:05:26.331 "compare_and_write": false, 00:05:26.331 "abort": true, 00:05:26.331 "seek_hole": false, 00:05:26.331 "seek_data": false, 00:05:26.331 "copy": true, 00:05:26.331 "nvme_iov_md": false 00:05:26.331 }, 00:05:26.331 "memory_domains": [ 00:05:26.331 { 00:05:26.331 "dma_device_id": "system", 00:05:26.331 "dma_device_type": 1 00:05:26.331 }, 00:05:26.331 { 00:05:26.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.331 "dma_device_type": 2 00:05:26.331 } 00:05:26.331 ], 00:05:26.331 "driver_specific": {} 00:05:26.331 }, 00:05:26.331 { 
00:05:26.331 "name": "Passthru0", 00:05:26.331 "aliases": [ 00:05:26.331 "e5f1ed15-909c-59b7-b5fc-a93e8ede6660" 00:05:26.331 ], 00:05:26.331 "product_name": "passthru", 00:05:26.331 "block_size": 512, 00:05:26.331 "num_blocks": 16384, 00:05:26.331 "uuid": "e5f1ed15-909c-59b7-b5fc-a93e8ede6660", 00:05:26.331 "assigned_rate_limits": { 00:05:26.331 "rw_ios_per_sec": 0, 00:05:26.331 "rw_mbytes_per_sec": 0, 00:05:26.331 "r_mbytes_per_sec": 0, 00:05:26.331 "w_mbytes_per_sec": 0 00:05:26.331 }, 00:05:26.331 "claimed": false, 00:05:26.331 "zoned": false, 00:05:26.331 "supported_io_types": { 00:05:26.331 "read": true, 00:05:26.331 "write": true, 00:05:26.331 "unmap": true, 00:05:26.331 "flush": true, 00:05:26.331 "reset": true, 00:05:26.331 "nvme_admin": false, 00:05:26.331 "nvme_io": false, 00:05:26.331 "nvme_io_md": false, 00:05:26.331 "write_zeroes": true, 00:05:26.331 "zcopy": true, 00:05:26.331 "get_zone_info": false, 00:05:26.331 "zone_management": false, 00:05:26.331 "zone_append": false, 00:05:26.331 "compare": false, 00:05:26.331 "compare_and_write": false, 00:05:26.331 "abort": true, 00:05:26.331 "seek_hole": false, 00:05:26.331 "seek_data": false, 00:05:26.331 "copy": true, 00:05:26.331 "nvme_iov_md": false 00:05:26.331 }, 00:05:26.331 "memory_domains": [ 00:05:26.331 { 00:05:26.331 "dma_device_id": "system", 00:05:26.331 "dma_device_type": 1 00:05:26.331 }, 00:05:26.331 { 00:05:26.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.331 "dma_device_type": 2 00:05:26.331 } 00:05:26.331 ], 00:05:26.331 "driver_specific": { 00:05:26.331 "passthru": { 00:05:26.331 "name": "Passthru0", 00:05:26.331 "base_bdev_name": "Malloc0" 00:05:26.331 } 00:05:26.331 } 00:05:26.331 } 00:05:26.331 ]' 00:05:26.331 00:10:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:26.331 00:10:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.331 00:10:50 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:26.331 00:10:50 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.331 00:10:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.331 00:10:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.331 00:10:50 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:26.331 00:10:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.331 00:10:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.331 00:10:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.331 00:10:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:26.331 00:10:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.331 00:10:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.331 00:10:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.331 00:10:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:26.331 00:10:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:26.331 00:10:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.331 00:05:26.331 real 0m0.233s 00:05:26.331 user 0m0.148s 00:05:26.331 sys 0m0.022s 00:05:26.331 00:10:50 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.331 00:10:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.331 ************************************ 00:05:26.331 END TEST rpc_integrity 00:05:26.331 ************************************ 00:05:26.591 00:10:50 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:26.591 00:10:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.591 00:10:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.591 00:10:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.591 ************************************ 00:05:26.591 START TEST rpc_plugins 
00:05:26.591 ************************************ 00:05:26.591 00:10:50 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:26.591 00:10:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:26.591 00:10:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.591 00:10:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.591 00:10:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.591 00:10:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:26.591 00:10:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:26.591 00:10:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.591 00:10:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.591 00:10:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.591 00:10:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:26.591 { 00:05:26.591 "name": "Malloc1", 00:05:26.591 "aliases": [ 00:05:26.591 "5c7e69dc-071b-4c71-acff-fca381e42793" 00:05:26.591 ], 00:05:26.591 "product_name": "Malloc disk", 00:05:26.591 "block_size": 4096, 00:05:26.591 "num_blocks": 256, 00:05:26.591 "uuid": "5c7e69dc-071b-4c71-acff-fca381e42793", 00:05:26.591 "assigned_rate_limits": { 00:05:26.591 "rw_ios_per_sec": 0, 00:05:26.591 "rw_mbytes_per_sec": 0, 00:05:26.591 "r_mbytes_per_sec": 0, 00:05:26.591 "w_mbytes_per_sec": 0 00:05:26.591 }, 00:05:26.591 "claimed": false, 00:05:26.591 "zoned": false, 00:05:26.591 "supported_io_types": { 00:05:26.591 "read": true, 00:05:26.591 "write": true, 00:05:26.591 "unmap": true, 00:05:26.591 "flush": true, 00:05:26.591 "reset": true, 00:05:26.591 "nvme_admin": false, 00:05:26.591 "nvme_io": false, 00:05:26.591 "nvme_io_md": false, 00:05:26.591 "write_zeroes": true, 00:05:26.591 "zcopy": true, 00:05:26.591 "get_zone_info": false, 00:05:26.591 "zone_management": false, 00:05:26.591 
"zone_append": false, 00:05:26.591 "compare": false, 00:05:26.591 "compare_and_write": false, 00:05:26.591 "abort": true, 00:05:26.591 "seek_hole": false, 00:05:26.591 "seek_data": false, 00:05:26.591 "copy": true, 00:05:26.591 "nvme_iov_md": false 00:05:26.591 }, 00:05:26.591 "memory_domains": [ 00:05:26.591 { 00:05:26.591 "dma_device_id": "system", 00:05:26.591 "dma_device_type": 1 00:05:26.591 }, 00:05:26.591 { 00:05:26.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.591 "dma_device_type": 2 00:05:26.591 } 00:05:26.591 ], 00:05:26.591 "driver_specific": {} 00:05:26.591 } 00:05:26.591 ]' 00:05:26.591 00:10:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:26.591 00:10:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:26.591 00:10:50 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:26.591 00:10:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.591 00:10:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.591 00:10:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.591 00:10:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:26.591 00:10:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.591 00:10:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.591 00:10:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.591 00:10:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:26.591 00:10:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:26.591 00:10:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:26.591 00:05:26.591 real 0m0.113s 00:05:26.591 user 0m0.071s 00:05:26.591 sys 0m0.009s 00:05:26.591 00:10:50 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.591 00:10:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.591 ************************************ 
00:05:26.591 END TEST rpc_plugins 00:05:26.591 ************************************ 00:05:26.591 00:10:50 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:26.591 00:10:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.591 00:10:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.591 00:10:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.591 ************************************ 00:05:26.591 START TEST rpc_trace_cmd_test 00:05:26.591 ************************************ 00:05:26.591 00:10:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:26.591 00:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:26.591 00:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:26.591 00:10:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.591 00:10:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.591 00:10:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.591 00:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:26.591 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid99160", 00:05:26.591 "tpoint_group_mask": "0x8", 00:05:26.591 "iscsi_conn": { 00:05:26.591 "mask": "0x2", 00:05:26.591 "tpoint_mask": "0x0" 00:05:26.591 }, 00:05:26.591 "scsi": { 00:05:26.591 "mask": "0x4", 00:05:26.591 "tpoint_mask": "0x0" 00:05:26.591 }, 00:05:26.591 "bdev": { 00:05:26.591 "mask": "0x8", 00:05:26.591 "tpoint_mask": "0xffffffffffffffff" 00:05:26.591 }, 00:05:26.591 "nvmf_rdma": { 00:05:26.591 "mask": "0x10", 00:05:26.591 "tpoint_mask": "0x0" 00:05:26.591 }, 00:05:26.591 "nvmf_tcp": { 00:05:26.591 "mask": "0x20", 00:05:26.591 "tpoint_mask": "0x0" 00:05:26.591 }, 00:05:26.591 "ftl": { 00:05:26.591 "mask": "0x40", 00:05:26.591 "tpoint_mask": "0x0" 00:05:26.591 }, 00:05:26.591 "blobfs": { 00:05:26.591 "mask": "0x80", 00:05:26.591 
"tpoint_mask": "0x0" 00:05:26.591 }, 00:05:26.591 "dsa": { 00:05:26.591 "mask": "0x200", 00:05:26.591 "tpoint_mask": "0x0" 00:05:26.591 }, 00:05:26.591 "thread": { 00:05:26.591 "mask": "0x400", 00:05:26.591 "tpoint_mask": "0x0" 00:05:26.591 }, 00:05:26.591 "nvme_pcie": { 00:05:26.591 "mask": "0x800", 00:05:26.591 "tpoint_mask": "0x0" 00:05:26.591 }, 00:05:26.591 "iaa": { 00:05:26.591 "mask": "0x1000", 00:05:26.591 "tpoint_mask": "0x0" 00:05:26.591 }, 00:05:26.591 "nvme_tcp": { 00:05:26.591 "mask": "0x2000", 00:05:26.591 "tpoint_mask": "0x0" 00:05:26.591 }, 00:05:26.591 "bdev_nvme": { 00:05:26.591 "mask": "0x4000", 00:05:26.591 "tpoint_mask": "0x0" 00:05:26.591 }, 00:05:26.591 "sock": { 00:05:26.591 "mask": "0x8000", 00:05:26.591 "tpoint_mask": "0x0" 00:05:26.591 }, 00:05:26.591 "blob": { 00:05:26.591 "mask": "0x10000", 00:05:26.591 "tpoint_mask": "0x0" 00:05:26.591 }, 00:05:26.591 "bdev_raid": { 00:05:26.591 "mask": "0x20000", 00:05:26.591 "tpoint_mask": "0x0" 00:05:26.591 }, 00:05:26.591 "scheduler": { 00:05:26.591 "mask": "0x40000", 00:05:26.591 "tpoint_mask": "0x0" 00:05:26.591 } 00:05:26.591 }' 00:05:26.592 00:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:26.850 00:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:26.850 00:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:26.850 00:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:26.850 00:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:26.850 00:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:26.850 00:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:26.850 00:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:26.850 00:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:26.850 00:10:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:05:26.850 00:05:26.850 real 0m0.208s 00:05:26.850 user 0m0.178s 00:05:26.850 sys 0m0.020s 00:05:26.850 00:10:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.850 00:10:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.850 ************************************ 00:05:26.850 END TEST rpc_trace_cmd_test 00:05:26.850 ************************************ 00:05:26.850 00:10:50 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:26.850 00:10:50 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:26.850 00:10:50 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:26.850 00:10:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.850 00:10:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.850 00:10:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.850 ************************************ 00:05:26.850 START TEST rpc_daemon_integrity 00:05:26.850 ************************************ 00:05:26.850 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:26.850 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.850 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.850 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.850 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.850 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.850 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.850 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.850 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.850 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.850 00:10:50 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:26.850 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.850 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:26.850 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.850 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.850 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.109 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.109 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:27.109 { 00:05:27.109 "name": "Malloc2", 00:05:27.109 "aliases": [ 00:05:27.109 "87d3ffdc-b9f8-42ad-bb51-cc01c27d5e21" 00:05:27.109 ], 00:05:27.109 "product_name": "Malloc disk", 00:05:27.109 "block_size": 512, 00:05:27.109 "num_blocks": 16384, 00:05:27.109 "uuid": "87d3ffdc-b9f8-42ad-bb51-cc01c27d5e21", 00:05:27.109 "assigned_rate_limits": { 00:05:27.109 "rw_ios_per_sec": 0, 00:05:27.109 "rw_mbytes_per_sec": 0, 00:05:27.109 "r_mbytes_per_sec": 0, 00:05:27.109 "w_mbytes_per_sec": 0 00:05:27.109 }, 00:05:27.109 "claimed": false, 00:05:27.109 "zoned": false, 00:05:27.109 "supported_io_types": { 00:05:27.109 "read": true, 00:05:27.109 "write": true, 00:05:27.109 "unmap": true, 00:05:27.109 "flush": true, 00:05:27.109 "reset": true, 00:05:27.109 "nvme_admin": false, 00:05:27.109 "nvme_io": false, 00:05:27.109 "nvme_io_md": false, 00:05:27.109 "write_zeroes": true, 00:05:27.109 "zcopy": true, 00:05:27.109 "get_zone_info": false, 00:05:27.109 "zone_management": false, 00:05:27.109 "zone_append": false, 00:05:27.109 "compare": false, 00:05:27.109 "compare_and_write": false, 00:05:27.109 "abort": true, 00:05:27.109 "seek_hole": false, 00:05:27.109 "seek_data": false, 00:05:27.109 "copy": true, 00:05:27.109 "nvme_iov_md": false 00:05:27.109 }, 00:05:27.109 "memory_domains": [ 00:05:27.109 { 
00:05:27.109 "dma_device_id": "system", 00:05:27.109 "dma_device_type": 1 00:05:27.109 }, 00:05:27.109 { 00:05:27.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.109 "dma_device_type": 2 00:05:27.109 } 00:05:27.109 ], 00:05:27.109 "driver_specific": {} 00:05:27.109 } 00:05:27.109 ]' 00:05:27.109 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:27.109 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:27.109 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:27.109 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.109 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.109 [2024-11-18 00:10:50.713047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:27.110 [2024-11-18 00:10:50.713097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:27.110 [2024-11-18 00:10:50.713118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd95560 00:05:27.110 [2024-11-18 00:10:50.713131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:27.110 [2024-11-18 00:10:50.714284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:27.110 [2024-11-18 00:10:50.714332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:27.110 Passthru0 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:27.110 { 00:05:27.110 "name": "Malloc2", 00:05:27.110 "aliases": [ 00:05:27.110 "87d3ffdc-b9f8-42ad-bb51-cc01c27d5e21" 00:05:27.110 ], 00:05:27.110 "product_name": "Malloc disk", 00:05:27.110 "block_size": 512, 00:05:27.110 "num_blocks": 16384, 00:05:27.110 "uuid": "87d3ffdc-b9f8-42ad-bb51-cc01c27d5e21", 00:05:27.110 "assigned_rate_limits": { 00:05:27.110 "rw_ios_per_sec": 0, 00:05:27.110 "rw_mbytes_per_sec": 0, 00:05:27.110 "r_mbytes_per_sec": 0, 00:05:27.110 "w_mbytes_per_sec": 0 00:05:27.110 }, 00:05:27.110 "claimed": true, 00:05:27.110 "claim_type": "exclusive_write", 00:05:27.110 "zoned": false, 00:05:27.110 "supported_io_types": { 00:05:27.110 "read": true, 00:05:27.110 "write": true, 00:05:27.110 "unmap": true, 00:05:27.110 "flush": true, 00:05:27.110 "reset": true, 00:05:27.110 "nvme_admin": false, 00:05:27.110 "nvme_io": false, 00:05:27.110 "nvme_io_md": false, 00:05:27.110 "write_zeroes": true, 00:05:27.110 "zcopy": true, 00:05:27.110 "get_zone_info": false, 00:05:27.110 "zone_management": false, 00:05:27.110 "zone_append": false, 00:05:27.110 "compare": false, 00:05:27.110 "compare_and_write": false, 00:05:27.110 "abort": true, 00:05:27.110 "seek_hole": false, 00:05:27.110 "seek_data": false, 00:05:27.110 "copy": true, 00:05:27.110 "nvme_iov_md": false 00:05:27.110 }, 00:05:27.110 "memory_domains": [ 00:05:27.110 { 00:05:27.110 "dma_device_id": "system", 00:05:27.110 "dma_device_type": 1 00:05:27.110 }, 00:05:27.110 { 00:05:27.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.110 "dma_device_type": 2 00:05:27.110 } 00:05:27.110 ], 00:05:27.110 "driver_specific": {} 00:05:27.110 }, 00:05:27.110 { 00:05:27.110 "name": "Passthru0", 00:05:27.110 "aliases": [ 00:05:27.110 "c27788fb-8d52-5b4e-aa94-0f583f035979" 00:05:27.110 ], 00:05:27.110 "product_name": "passthru", 00:05:27.110 "block_size": 512, 00:05:27.110 "num_blocks": 16384, 00:05:27.110 "uuid": 
"c27788fb-8d52-5b4e-aa94-0f583f035979", 00:05:27.110 "assigned_rate_limits": { 00:05:27.110 "rw_ios_per_sec": 0, 00:05:27.110 "rw_mbytes_per_sec": 0, 00:05:27.110 "r_mbytes_per_sec": 0, 00:05:27.110 "w_mbytes_per_sec": 0 00:05:27.110 }, 00:05:27.110 "claimed": false, 00:05:27.110 "zoned": false, 00:05:27.110 "supported_io_types": { 00:05:27.110 "read": true, 00:05:27.110 "write": true, 00:05:27.110 "unmap": true, 00:05:27.110 "flush": true, 00:05:27.110 "reset": true, 00:05:27.110 "nvme_admin": false, 00:05:27.110 "nvme_io": false, 00:05:27.110 "nvme_io_md": false, 00:05:27.110 "write_zeroes": true, 00:05:27.110 "zcopy": true, 00:05:27.110 "get_zone_info": false, 00:05:27.110 "zone_management": false, 00:05:27.110 "zone_append": false, 00:05:27.110 "compare": false, 00:05:27.110 "compare_and_write": false, 00:05:27.110 "abort": true, 00:05:27.110 "seek_hole": false, 00:05:27.110 "seek_data": false, 00:05:27.110 "copy": true, 00:05:27.110 "nvme_iov_md": false 00:05:27.110 }, 00:05:27.110 "memory_domains": [ 00:05:27.110 { 00:05:27.110 "dma_device_id": "system", 00:05:27.110 "dma_device_type": 1 00:05:27.110 }, 00:05:27.110 { 00:05:27.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.110 "dma_device_type": 2 00:05:27.110 } 00:05:27.110 ], 00:05:27.110 "driver_specific": { 00:05:27.110 "passthru": { 00:05:27.110 "name": "Passthru0", 00:05:27.110 "base_bdev_name": "Malloc2" 00:05:27.110 } 00:05:27.110 } 00:05:27.110 } 00:05:27.110 ]' 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:27.110 00:05:27.110 real 0m0.207s 00:05:27.110 user 0m0.130s 00:05:27.110 sys 0m0.025s 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.110 00:10:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.110 ************************************ 00:05:27.110 END TEST rpc_daemon_integrity 00:05:27.110 ************************************ 00:05:27.110 00:10:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:27.110 00:10:50 rpc -- rpc/rpc.sh@84 -- # killprocess 99160 00:05:27.110 00:10:50 rpc -- common/autotest_common.sh@954 -- # '[' -z 99160 ']' 00:05:27.110 00:10:50 rpc -- common/autotest_common.sh@958 -- # kill -0 99160 00:05:27.110 00:10:50 rpc -- common/autotest_common.sh@959 -- # uname 00:05:27.110 00:10:50 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.110 00:10:50 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99160 00:05:27.110 00:10:50 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.110 00:10:50 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.110 00:10:50 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99160' 00:05:27.110 killing process with pid 99160 00:05:27.110 00:10:50 rpc -- common/autotest_common.sh@973 -- # kill 99160 00:05:27.110 00:10:50 rpc -- common/autotest_common.sh@978 -- # wait 99160 00:05:27.678 00:05:27.678 real 0m1.942s 00:05:27.678 user 0m2.427s 00:05:27.678 sys 0m0.613s 00:05:27.678 00:10:51 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.678 00:10:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.678 ************************************ 00:05:27.678 END TEST rpc 00:05:27.678 ************************************ 00:05:27.678 00:10:51 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:27.678 00:10:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.678 00:10:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.678 00:10:51 -- common/autotest_common.sh@10 -- # set +x 00:05:27.678 ************************************ 00:05:27.678 START TEST skip_rpc 00:05:27.678 ************************************ 00:05:27.678 00:10:51 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:27.678 * Looking for test storage... 
00:05:27.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:27.678 00:10:51 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:27.678 00:10:51 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:27.678 00:10:51 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:27.678 00:10:51 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.678 00:10:51 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:27.678 00:10:51 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.678 00:10:51 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:27.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.678 --rc genhtml_branch_coverage=1 00:05:27.678 --rc genhtml_function_coverage=1 00:05:27.678 --rc genhtml_legend=1 00:05:27.678 --rc geninfo_all_blocks=1 00:05:27.678 --rc geninfo_unexecuted_blocks=1 00:05:27.678 00:05:27.678 ' 00:05:27.678 00:10:51 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:27.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.678 --rc genhtml_branch_coverage=1 00:05:27.678 --rc genhtml_function_coverage=1 00:05:27.678 --rc genhtml_legend=1 00:05:27.678 --rc geninfo_all_blocks=1 00:05:27.678 --rc geninfo_unexecuted_blocks=1 00:05:27.678 00:05:27.678 ' 00:05:27.678 00:10:51 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:27.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.678 --rc genhtml_branch_coverage=1 00:05:27.678 --rc genhtml_function_coverage=1 00:05:27.678 --rc genhtml_legend=1 00:05:27.678 --rc geninfo_all_blocks=1 00:05:27.678 --rc geninfo_unexecuted_blocks=1 00:05:27.678 00:05:27.678 ' 00:05:27.678 00:10:51 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:27.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.678 --rc genhtml_branch_coverage=1 00:05:27.678 --rc genhtml_function_coverage=1 00:05:27.678 --rc genhtml_legend=1 00:05:27.678 --rc geninfo_all_blocks=1 00:05:27.678 --rc geninfo_unexecuted_blocks=1 00:05:27.678 00:05:27.678 ' 00:05:27.678 00:10:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:27.678 00:10:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:27.678 00:10:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:27.678 00:10:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.678 00:10:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.678 00:10:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.678 ************************************ 00:05:27.678 START TEST skip_rpc 00:05:27.678 ************************************ 00:05:27.678 00:10:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:27.678 00:10:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=99490 00:05:27.678 00:10:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:27.678 00:10:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.678 00:10:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:05:27.937 [2024-11-18 00:10:51.542005] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:27.937 [2024-11-18 00:10:51.542069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99490 ] 00:05:27.937 [2024-11-18 00:10:51.610474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.937 [2024-11-18 00:10:51.656193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:33.205 00:10:56 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 99490 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 99490 ']' 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 99490 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99490 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99490' 00:05:33.205 killing process with pid 99490 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 99490 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 99490 00:05:33.205 00:05:33.205 real 0m5.423s 00:05:33.205 user 0m5.127s 00:05:33.205 sys 0m0.316s 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.205 00:10:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.205 ************************************ 00:05:33.205 END TEST skip_rpc 00:05:33.205 ************************************ 00:05:33.205 00:10:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:33.205 00:10:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.205 00:10:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.205 00:10:56 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:33.205 ************************************ 00:05:33.205 START TEST skip_rpc_with_json 00:05:33.205 ************************************ 00:05:33.205 00:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:33.205 00:10:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:33.205 00:10:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=100177 00:05:33.205 00:10:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.205 00:10:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.205 00:10:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 100177 00:05:33.205 00:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 100177 ']' 00:05:33.205 00:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.205 00:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.205 00:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.205 00:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.205 00:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.205 [2024-11-18 00:10:57.017790] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:05:33.205 [2024-11-18 00:10:57.017870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100177 ] 00:05:33.464 [2024-11-18 00:10:57.084198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.464 [2024-11-18 00:10:57.126197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.724 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.724 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:33.724 00:10:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:33.724 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.724 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.724 [2024-11-18 00:10:57.371074] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:33.724 request: 00:05:33.724 { 00:05:33.724 "trtype": "tcp", 00:05:33.724 "method": "nvmf_get_transports", 00:05:33.724 "req_id": 1 00:05:33.724 } 00:05:33.724 Got JSON-RPC error response 00:05:33.724 response: 00:05:33.724 { 00:05:33.724 "code": -19, 00:05:33.724 "message": "No such device" 00:05:33.724 } 00:05:33.724 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:33.724 00:10:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:33.724 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.724 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.724 [2024-11-18 00:10:57.379179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.724 00:10:57 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.724 00:10:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:33.724 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.724 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.724 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.724 00:10:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:33.724 { 00:05:33.724 "subsystems": [ 00:05:33.724 { 00:05:33.724 "subsystem": "fsdev", 00:05:33.724 "config": [ 00:05:33.724 { 00:05:33.724 "method": "fsdev_set_opts", 00:05:33.724 "params": { 00:05:33.724 "fsdev_io_pool_size": 65535, 00:05:33.724 "fsdev_io_cache_size": 256 00:05:33.724 } 00:05:33.724 } 00:05:33.724 ] 00:05:33.724 }, 00:05:33.724 { 00:05:33.724 "subsystem": "vfio_user_target", 00:05:33.724 "config": null 00:05:33.724 }, 00:05:33.724 { 00:05:33.724 "subsystem": "keyring", 00:05:33.724 "config": [] 00:05:33.724 }, 00:05:33.724 { 00:05:33.724 "subsystem": "iobuf", 00:05:33.724 "config": [ 00:05:33.724 { 00:05:33.724 "method": "iobuf_set_options", 00:05:33.724 "params": { 00:05:33.724 "small_pool_count": 8192, 00:05:33.724 "large_pool_count": 1024, 00:05:33.724 "small_bufsize": 8192, 00:05:33.724 "large_bufsize": 135168, 00:05:33.724 "enable_numa": false 00:05:33.724 } 00:05:33.724 } 00:05:33.724 ] 00:05:33.724 }, 00:05:33.724 { 00:05:33.724 "subsystem": "sock", 00:05:33.724 "config": [ 00:05:33.724 { 00:05:33.724 "method": "sock_set_default_impl", 00:05:33.724 "params": { 00:05:33.724 "impl_name": "posix" 00:05:33.724 } 00:05:33.724 }, 00:05:33.724 { 00:05:33.724 "method": "sock_impl_set_options", 00:05:33.724 "params": { 00:05:33.724 "impl_name": "ssl", 00:05:33.724 "recv_buf_size": 4096, 00:05:33.724 "send_buf_size": 4096, 
00:05:33.724 "enable_recv_pipe": true, 00:05:33.724 "enable_quickack": false, 00:05:33.724 "enable_placement_id": 0, 00:05:33.724 "enable_zerocopy_send_server": true, 00:05:33.724 "enable_zerocopy_send_client": false, 00:05:33.724 "zerocopy_threshold": 0, 00:05:33.724 "tls_version": 0, 00:05:33.724 "enable_ktls": false 00:05:33.724 } 00:05:33.724 }, 00:05:33.724 { 00:05:33.724 "method": "sock_impl_set_options", 00:05:33.724 "params": { 00:05:33.724 "impl_name": "posix", 00:05:33.724 "recv_buf_size": 2097152, 00:05:33.724 "send_buf_size": 2097152, 00:05:33.724 "enable_recv_pipe": true, 00:05:33.724 "enable_quickack": false, 00:05:33.724 "enable_placement_id": 0, 00:05:33.724 "enable_zerocopy_send_server": true, 00:05:33.724 "enable_zerocopy_send_client": false, 00:05:33.724 "zerocopy_threshold": 0, 00:05:33.724 "tls_version": 0, 00:05:33.724 "enable_ktls": false 00:05:33.724 } 00:05:33.724 } 00:05:33.724 ] 00:05:33.724 }, 00:05:33.724 { 00:05:33.724 "subsystem": "vmd", 00:05:33.724 "config": [] 00:05:33.724 }, 00:05:33.724 { 00:05:33.724 "subsystem": "accel", 00:05:33.724 "config": [ 00:05:33.724 { 00:05:33.724 "method": "accel_set_options", 00:05:33.724 "params": { 00:05:33.724 "small_cache_size": 128, 00:05:33.724 "large_cache_size": 16, 00:05:33.724 "task_count": 2048, 00:05:33.724 "sequence_count": 2048, 00:05:33.724 "buf_count": 2048 00:05:33.724 } 00:05:33.724 } 00:05:33.724 ] 00:05:33.724 }, 00:05:33.724 { 00:05:33.724 "subsystem": "bdev", 00:05:33.724 "config": [ 00:05:33.724 { 00:05:33.724 "method": "bdev_set_options", 00:05:33.724 "params": { 00:05:33.724 "bdev_io_pool_size": 65535, 00:05:33.724 "bdev_io_cache_size": 256, 00:05:33.724 "bdev_auto_examine": true, 00:05:33.724 "iobuf_small_cache_size": 128, 00:05:33.724 "iobuf_large_cache_size": 16 00:05:33.724 } 00:05:33.724 }, 00:05:33.724 { 00:05:33.724 "method": "bdev_raid_set_options", 00:05:33.724 "params": { 00:05:33.724 "process_window_size_kb": 1024, 00:05:33.724 "process_max_bandwidth_mb_sec": 0 
00:05:33.724 } 00:05:33.724 }, 00:05:33.724 { 00:05:33.724 "method": "bdev_iscsi_set_options", 00:05:33.724 "params": { 00:05:33.724 "timeout_sec": 30 00:05:33.724 } 00:05:33.724 }, 00:05:33.724 { 00:05:33.724 "method": "bdev_nvme_set_options", 00:05:33.724 "params": { 00:05:33.724 "action_on_timeout": "none", 00:05:33.724 "timeout_us": 0, 00:05:33.724 "timeout_admin_us": 0, 00:05:33.724 "keep_alive_timeout_ms": 10000, 00:05:33.724 "arbitration_burst": 0, 00:05:33.724 "low_priority_weight": 0, 00:05:33.724 "medium_priority_weight": 0, 00:05:33.724 "high_priority_weight": 0, 00:05:33.724 "nvme_adminq_poll_period_us": 10000, 00:05:33.724 "nvme_ioq_poll_period_us": 0, 00:05:33.724 "io_queue_requests": 0, 00:05:33.724 "delay_cmd_submit": true, 00:05:33.724 "transport_retry_count": 4, 00:05:33.724 "bdev_retry_count": 3, 00:05:33.724 "transport_ack_timeout": 0, 00:05:33.724 "ctrlr_loss_timeout_sec": 0, 00:05:33.724 "reconnect_delay_sec": 0, 00:05:33.724 "fast_io_fail_timeout_sec": 0, 00:05:33.724 "disable_auto_failback": false, 00:05:33.724 "generate_uuids": false, 00:05:33.724 "transport_tos": 0, 00:05:33.724 "nvme_error_stat": false, 00:05:33.724 "rdma_srq_size": 0, 00:05:33.724 "io_path_stat": false, 00:05:33.724 "allow_accel_sequence": false, 00:05:33.724 "rdma_max_cq_size": 0, 00:05:33.724 "rdma_cm_event_timeout_ms": 0, 00:05:33.724 "dhchap_digests": [ 00:05:33.724 "sha256", 00:05:33.724 "sha384", 00:05:33.724 "sha512" 00:05:33.724 ], 00:05:33.724 "dhchap_dhgroups": [ 00:05:33.724 "null", 00:05:33.724 "ffdhe2048", 00:05:33.724 "ffdhe3072", 00:05:33.724 "ffdhe4096", 00:05:33.724 "ffdhe6144", 00:05:33.724 "ffdhe8192" 00:05:33.724 ] 00:05:33.724 } 00:05:33.724 }, 00:05:33.724 { 00:05:33.724 "method": "bdev_nvme_set_hotplug", 00:05:33.724 "params": { 00:05:33.724 "period_us": 100000, 00:05:33.724 "enable": false 00:05:33.724 } 00:05:33.724 }, 00:05:33.724 { 00:05:33.724 "method": "bdev_wait_for_examine" 00:05:33.724 } 00:05:33.724 ] 00:05:33.724 }, 00:05:33.724 { 
00:05:33.724 "subsystem": "scsi", 00:05:33.724 "config": null 00:05:33.724 }, 00:05:33.724 { 00:05:33.724 "subsystem": "scheduler", 00:05:33.724 "config": [ 00:05:33.724 { 00:05:33.724 "method": "framework_set_scheduler", 00:05:33.724 "params": { 00:05:33.724 "name": "static" 00:05:33.724 } 00:05:33.724 } 00:05:33.724 ] 00:05:33.724 }, 00:05:33.725 { 00:05:33.725 "subsystem": "vhost_scsi", 00:05:33.725 "config": [] 00:05:33.725 }, 00:05:33.725 { 00:05:33.725 "subsystem": "vhost_blk", 00:05:33.725 "config": [] 00:05:33.725 }, 00:05:33.725 { 00:05:33.725 "subsystem": "ublk", 00:05:33.725 "config": [] 00:05:33.725 }, 00:05:33.725 { 00:05:33.725 "subsystem": "nbd", 00:05:33.725 "config": [] 00:05:33.725 }, 00:05:33.725 { 00:05:33.725 "subsystem": "nvmf", 00:05:33.725 "config": [ 00:05:33.725 { 00:05:33.725 "method": "nvmf_set_config", 00:05:33.725 "params": { 00:05:33.725 "discovery_filter": "match_any", 00:05:33.725 "admin_cmd_passthru": { 00:05:33.725 "identify_ctrlr": false 00:05:33.725 }, 00:05:33.725 "dhchap_digests": [ 00:05:33.725 "sha256", 00:05:33.725 "sha384", 00:05:33.725 "sha512" 00:05:33.725 ], 00:05:33.725 "dhchap_dhgroups": [ 00:05:33.725 "null", 00:05:33.725 "ffdhe2048", 00:05:33.725 "ffdhe3072", 00:05:33.725 "ffdhe4096", 00:05:33.725 "ffdhe6144", 00:05:33.725 "ffdhe8192" 00:05:33.725 ] 00:05:33.725 } 00:05:33.725 }, 00:05:33.725 { 00:05:33.725 "method": "nvmf_set_max_subsystems", 00:05:33.725 "params": { 00:05:33.725 "max_subsystems": 1024 00:05:33.725 } 00:05:33.725 }, 00:05:33.725 { 00:05:33.725 "method": "nvmf_set_crdt", 00:05:33.725 "params": { 00:05:33.725 "crdt1": 0, 00:05:33.725 "crdt2": 0, 00:05:33.725 "crdt3": 0 00:05:33.725 } 00:05:33.725 }, 00:05:33.725 { 00:05:33.725 "method": "nvmf_create_transport", 00:05:33.725 "params": { 00:05:33.725 "trtype": "TCP", 00:05:33.725 "max_queue_depth": 128, 00:05:33.725 "max_io_qpairs_per_ctrlr": 127, 00:05:33.725 "in_capsule_data_size": 4096, 00:05:33.725 "max_io_size": 131072, 00:05:33.725 
"io_unit_size": 131072, 00:05:33.725 "max_aq_depth": 128, 00:05:33.725 "num_shared_buffers": 511, 00:05:33.725 "buf_cache_size": 4294967295, 00:05:33.725 "dif_insert_or_strip": false, 00:05:33.725 "zcopy": false, 00:05:33.725 "c2h_success": true, 00:05:33.725 "sock_priority": 0, 00:05:33.725 "abort_timeout_sec": 1, 00:05:33.725 "ack_timeout": 0, 00:05:33.725 "data_wr_pool_size": 0 00:05:33.725 } 00:05:33.725 } 00:05:33.725 ] 00:05:33.725 }, 00:05:33.725 { 00:05:33.725 "subsystem": "iscsi", 00:05:33.725 "config": [ 00:05:33.725 { 00:05:33.725 "method": "iscsi_set_options", 00:05:33.725 "params": { 00:05:33.725 "node_base": "iqn.2016-06.io.spdk", 00:05:33.725 "max_sessions": 128, 00:05:33.725 "max_connections_per_session": 2, 00:05:33.725 "max_queue_depth": 64, 00:05:33.725 "default_time2wait": 2, 00:05:33.725 "default_time2retain": 20, 00:05:33.725 "first_burst_length": 8192, 00:05:33.725 "immediate_data": true, 00:05:33.725 "allow_duplicated_isid": false, 00:05:33.725 "error_recovery_level": 0, 00:05:33.725 "nop_timeout": 60, 00:05:33.725 "nop_in_interval": 30, 00:05:33.725 "disable_chap": false, 00:05:33.725 "require_chap": false, 00:05:33.725 "mutual_chap": false, 00:05:33.725 "chap_group": 0, 00:05:33.725 "max_large_datain_per_connection": 64, 00:05:33.725 "max_r2t_per_connection": 4, 00:05:33.725 "pdu_pool_size": 36864, 00:05:33.725 "immediate_data_pool_size": 16384, 00:05:33.725 "data_out_pool_size": 2048 00:05:33.725 } 00:05:33.725 } 00:05:33.725 ] 00:05:33.725 } 00:05:33.725 ] 00:05:33.725 } 00:05:33.725 00:10:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:33.725 00:10:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 100177 00:05:33.725 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 100177 ']' 00:05:33.725 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 100177 00:05:33.987 00:10:57 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:05:33.987 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.987 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100177 00:05:33.987 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.987 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.987 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100177' 00:05:33.987 killing process with pid 100177 00:05:33.987 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 100177 00:05:33.987 00:10:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 100177 00:05:34.245 00:10:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=100320 00:05:34.245 00:10:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:34.245 00:10:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:39.521 00:11:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 100320 00:05:39.521 00:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 100320 ']' 00:05:39.521 00:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 100320 00:05:39.521 00:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:39.521 00:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.521 00:11:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100320 00:05:39.521 00:11:02 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.521 00:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.521 00:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100320' 00:05:39.521 killing process with pid 100320 00:05:39.521 00:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 100320 00:05:39.521 00:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 100320 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:39.779 00:05:39.779 real 0m6.414s 00:05:39.779 user 0m6.076s 00:05:39.779 sys 0m0.659s 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.779 ************************************ 00:05:39.779 END TEST skip_rpc_with_json 00:05:39.779 ************************************ 00:05:39.779 00:11:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:39.779 00:11:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.779 00:11:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.779 00:11:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.779 ************************************ 00:05:39.779 START TEST skip_rpc_with_delay 00:05:39.779 ************************************ 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.779 [2024-11-18 00:11:03.484961] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.779 00:05:39.779 real 0m0.072s 00:05:39.779 user 0m0.047s 00:05:39.779 sys 0m0.025s 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.779 00:11:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:39.779 ************************************ 00:05:39.779 END TEST skip_rpc_with_delay 00:05:39.779 ************************************ 00:05:39.779 00:11:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:39.779 00:11:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:39.779 00:11:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:39.779 00:11:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.779 00:11:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.780 00:11:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.780 ************************************ 00:05:39.780 START TEST exit_on_failed_rpc_init 00:05:39.780 ************************************ 00:05:39.780 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:39.780 00:11:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=101036 00:05:39.780 00:11:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.780 00:11:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 101036 
00:05:39.780 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 101036 ']' 00:05:39.780 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.780 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.780 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.780 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.780 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.040 [2024-11-18 00:11:03.612715] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:40.040 [2024-11-18 00:11:03.612808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101036 ] 00:05:40.040 [2024-11-18 00:11:03.679274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.040 [2024-11-18 00:11:03.728916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.299 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.299 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:40.299 00:11:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.299 00:11:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.299 
00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:40.299 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.299 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.299 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.299 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.299 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.299 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.299 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.299 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.299 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:40.299 00:11:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.299 [2024-11-18 00:11:04.040860] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:05:40.299 [2024-11-18 00:11:04.040941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101043 ] 00:05:40.299 [2024-11-18 00:11:04.109247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.557 [2024-11-18 00:11:04.156954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.557 [2024-11-18 00:11:04.157084] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:40.557 [2024-11-18 00:11:04.157103] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:40.557 [2024-11-18 00:11:04.157115] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.557 00:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:40.557 00:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.557 00:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:40.557 00:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:40.557 00:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:40.557 00:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.557 00:11:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:40.557 00:11:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 101036 00:05:40.557 00:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 101036 ']' 00:05:40.557 00:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 101036 00:05:40.557 00:11:04 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:40.557 00:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.557 00:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101036 00:05:40.557 00:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.558 00:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.558 00:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101036' 00:05:40.558 killing process with pid 101036 00:05:40.558 00:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 101036 00:05:40.558 00:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 101036 00:05:40.817 00:05:40.817 real 0m1.062s 00:05:40.817 user 0m1.131s 00:05:40.817 sys 0m0.448s 00:05:40.817 00:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.817 00:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.817 ************************************ 00:05:40.817 END TEST exit_on_failed_rpc_init 00:05:40.817 ************************************ 00:05:41.076 00:11:04 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:41.076 00:05:41.076 real 0m13.332s 00:05:41.077 user 0m12.567s 00:05:41.077 sys 0m1.640s 00:05:41.077 00:11:04 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.077 00:11:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.077 ************************************ 00:05:41.077 END TEST skip_rpc 00:05:41.077 ************************************ 00:05:41.077 00:11:04 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:41.077 00:11:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.077 00:11:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.077 00:11:04 -- common/autotest_common.sh@10 -- # set +x 00:05:41.077 ************************************ 00:05:41.077 START TEST rpc_client 00:05:41.077 ************************************ 00:05:41.077 00:11:04 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:41.077 * Looking for test storage... 00:05:41.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:41.077 00:11:04 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.077 00:11:04 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.077 00:11:04 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.077 00:11:04 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.077 00:11:04 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:41.077 00:11:04 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.077 00:11:04 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.077 --rc genhtml_branch_coverage=1 00:05:41.077 --rc genhtml_function_coverage=1 00:05:41.077 --rc genhtml_legend=1 00:05:41.077 --rc geninfo_all_blocks=1 00:05:41.077 --rc geninfo_unexecuted_blocks=1 00:05:41.077 00:05:41.077 ' 00:05:41.077 00:11:04 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.077 --rc genhtml_branch_coverage=1 
00:05:41.077 --rc genhtml_function_coverage=1 00:05:41.077 --rc genhtml_legend=1 00:05:41.077 --rc geninfo_all_blocks=1 00:05:41.077 --rc geninfo_unexecuted_blocks=1 00:05:41.077 00:05:41.077 ' 00:05:41.077 00:11:04 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.077 --rc genhtml_branch_coverage=1 00:05:41.077 --rc genhtml_function_coverage=1 00:05:41.077 --rc genhtml_legend=1 00:05:41.077 --rc geninfo_all_blocks=1 00:05:41.077 --rc geninfo_unexecuted_blocks=1 00:05:41.077 00:05:41.077 ' 00:05:41.077 00:11:04 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.077 --rc genhtml_branch_coverage=1 00:05:41.077 --rc genhtml_function_coverage=1 00:05:41.077 --rc genhtml_legend=1 00:05:41.077 --rc geninfo_all_blocks=1 00:05:41.077 --rc geninfo_unexecuted_blocks=1 00:05:41.077 00:05:41.077 ' 00:05:41.077 00:11:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:41.077 OK 00:05:41.077 00:11:04 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:41.077 00:05:41.077 real 0m0.164s 00:05:41.077 user 0m0.113s 00:05:41.077 sys 0m0.060s 00:05:41.077 00:11:04 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.077 00:11:04 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:41.077 ************************************ 00:05:41.077 END TEST rpc_client 00:05:41.077 ************************************ 00:05:41.077 00:11:04 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:41.077 00:11:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.077 00:11:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.077 00:11:04 -- common/autotest_common.sh@10 
-- # set +x 00:05:41.343 ************************************ 00:05:41.343 START TEST json_config 00:05:41.343 ************************************ 00:05:41.343 00:11:04 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:41.343 00:11:04 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.343 00:11:04 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.343 00:11:04 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.343 00:11:05 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.343 00:11:05 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.343 00:11:05 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.343 00:11:05 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.343 00:11:05 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.343 00:11:05 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.343 00:11:05 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.343 00:11:05 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.343 00:11:05 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.343 00:11:05 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.343 00:11:05 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.343 00:11:05 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.343 00:11:05 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:41.343 00:11:05 json_config -- scripts/common.sh@345 -- # : 1 00:05:41.343 00:11:05 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.343 00:11:05 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.343 00:11:05 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:41.343 00:11:05 json_config -- scripts/common.sh@353 -- # local d=1 00:05:41.343 00:11:05 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.343 00:11:05 json_config -- scripts/common.sh@355 -- # echo 1 00:05:41.343 00:11:05 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.343 00:11:05 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:41.343 00:11:05 json_config -- scripts/common.sh@353 -- # local d=2 00:05:41.343 00:11:05 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.343 00:11:05 json_config -- scripts/common.sh@355 -- # echo 2 00:05:41.343 00:11:05 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.343 00:11:05 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.343 00:11:05 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.343 00:11:05 json_config -- scripts/common.sh@368 -- # return 0 00:05:41.343 00:11:05 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.343 00:11:05 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.343 --rc genhtml_branch_coverage=1 00:05:41.343 --rc genhtml_function_coverage=1 00:05:41.343 --rc genhtml_legend=1 00:05:41.343 --rc geninfo_all_blocks=1 00:05:41.343 --rc geninfo_unexecuted_blocks=1 00:05:41.343 00:05:41.343 ' 00:05:41.343 00:11:05 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.343 --rc genhtml_branch_coverage=1 00:05:41.343 --rc genhtml_function_coverage=1 00:05:41.343 --rc genhtml_legend=1 00:05:41.343 --rc geninfo_all_blocks=1 00:05:41.343 --rc geninfo_unexecuted_blocks=1 00:05:41.343 00:05:41.343 ' 00:05:41.343 00:11:05 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.343 --rc genhtml_branch_coverage=1 00:05:41.343 --rc genhtml_function_coverage=1 00:05:41.343 --rc genhtml_legend=1 00:05:41.343 --rc geninfo_all_blocks=1 00:05:41.343 --rc geninfo_unexecuted_blocks=1 00:05:41.343 00:05:41.343 ' 00:05:41.343 00:11:05 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.343 --rc genhtml_branch_coverage=1 00:05:41.343 --rc genhtml_function_coverage=1 00:05:41.343 --rc genhtml_legend=1 00:05:41.343 --rc geninfo_all_blocks=1 00:05:41.343 --rc geninfo_unexecuted_blocks=1 00:05:41.343 00:05:41.343 ' 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.343 00:11:05 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.343 00:11:05 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.343 00:11:05 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.343 00:11:05 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.343 00:11:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.343 00:11:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.343 00:11:05 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.343 00:11:05 json_config -- paths/export.sh@5 -- # export PATH 00:05:41.343 00:11:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@51 -- # : 0 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:41.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:41.343 00:11:05 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:41.343 INFO: JSON configuration test init 00:05:41.343 00:11:05 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:41.343 00:11:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.343 00:11:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:41.343 00:11:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.343 00:11:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.343 00:11:05 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:41.343 00:11:05 json_config -- json_config/common.sh@9 -- # local app=target 00:05:41.343 00:11:05 json_config -- json_config/common.sh@10 -- # shift 00:05:41.343 00:11:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.343 00:11:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.343 00:11:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.343 00:11:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.343 00:11:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.343 00:11:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=101301 00:05:41.343 00:11:05 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:41.343 00:11:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.343 Waiting for target to run... 
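The `waitforlisten 101301 /var/tmp/spdk_tgt.sock` step above polls until the freshly launched spdk_tgt is reachable on its UNIX-domain RPC socket. A minimal sketch of that pattern (simplified: it only waits for the socket file to appear, and `wait_for_socket` is a name invented here, not the real `waitforlisten` from autotest_common.sh, which also confirms the process is alive):

```shell
# Hypothetical simplification of waitforlisten: poll for a UNIX-domain
# socket file to show up, bounded by a retry count.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    local i
    for ((i = 0; i < retries; i++)); do
        [[ -S $sock ]] && return 0   # socket exists -> target is listening
        sleep 0.1
    done
    return 1                          # gave up; caller reports a timeout
}
```

The real helper is more thorough (it also checks the pid still exists so a crashed target fails fast instead of timing out), but the bounded-poll shape is the same.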
00:05:41.343 00:11:05 json_config -- json_config/common.sh@25 -- # waitforlisten 101301 /var/tmp/spdk_tgt.sock 00:05:41.343 00:11:05 json_config -- common/autotest_common.sh@835 -- # '[' -z 101301 ']' 00:05:41.343 00:11:05 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.343 00:11:05 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.343 00:11:05 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.343 00:11:05 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.343 00:11:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.343 [2024-11-18 00:11:05.137094] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:41.343 [2024-11-18 00:11:05.137183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101301 ] 00:05:41.920 [2024-11-18 00:11:05.488605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.920 [2024-11-18 00:11:05.519278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.486 00:11:06 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.486 00:11:06 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:42.486 00:11:06 json_config -- json_config/common.sh@26 -- # echo '' 00:05:42.486 00:05:42.486 00:11:06 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:42.486 00:11:06 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:42.486 00:11:06 json_config -- common/autotest_common.sh@726 
-- # xtrace_disable 00:05:42.486 00:11:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.486 00:11:06 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:42.486 00:11:06 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:42.486 00:11:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.486 00:11:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.486 00:11:06 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:42.486 00:11:06 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:42.486 00:11:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:45.822 00:11:09 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:45.822 00:11:09 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:45.823 00:11:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:45.823 00:11:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:45.823 00:11:09 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@54 -- # sort 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:45.823 00:11:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:45.823 00:11:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@237 -- # timing_enter 
create_nvmf_subsystem_config 00:05:45.823 00:11:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:45.823 00:11:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:45.823 00:11:09 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:45.823 00:11:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:46.081 MallocForNvmf0 00:05:46.339 00:11:09 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:46.339 00:11:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:46.597 MallocForNvmf1 00:05:46.597 00:11:10 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:46.597 00:11:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:46.855 [2024-11-18 00:11:10.434854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:46.855 00:11:10 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:46.856 00:11:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:47.115 00:11:10 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:47.115 00:11:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:47.376 00:11:10 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:47.376 00:11:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:47.635 00:11:11 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:47.635 00:11:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:47.893 [2024-11-18 00:11:11.502237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:47.893 00:11:11 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:47.893 00:11:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:47.893 00:11:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.893 00:11:11 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:47.893 00:11:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:47.893 00:11:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.893 00:11:11 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:47.893 00:11:11 
json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:47.893 00:11:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:48.150 MallocBdevForConfigChangeCheck 00:05:48.150 00:11:11 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:48.150 00:11:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:48.150 00:11:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.150 00:11:11 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:48.150 00:11:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.717 00:11:12 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:48.717 INFO: shutting down applications... 
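The create_nvmf_subsystem_config phase recorded above boils down to a short RPC sequence against the target socket. A sketch collecting exactly the calls from this run, assuming a target already listening on /var/tmp/spdk_tgt.sock (`setup_nvmf_tgt` is a name invented here; override `RPC`, e.g. `RPC=echo`, for a dry run):

```shell
# Sketch of the nvmf setup RPCs traced above. RPC defaults to the real
# rpc.py invocation but can be overridden to dry-run the sequence.
RPC="${RPC:-scripts/rpc.py -s /var/tmp/spdk_tgt.sock}"

setup_nvmf_tgt() {
    # Backing malloc bdevs for the two namespaces
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport, then the subsystem with both namespaces and a listener
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
}
```

In the log these calls go through the `tgt_rpc` wrapper one at a time; the order matters only in that the bdevs must exist before `nvmf_subsystem_add_ns` references them and the transport before the listener.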
00:05:48.717 00:11:12 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:48.717 00:11:12 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:48.717 00:11:12 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:48.717 00:11:12 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:50.617 Calling clear_iscsi_subsystem 00:05:50.617 Calling clear_nvmf_subsystem 00:05:50.617 Calling clear_nbd_subsystem 00:05:50.617 Calling clear_ublk_subsystem 00:05:50.617 Calling clear_vhost_blk_subsystem 00:05:50.617 Calling clear_vhost_scsi_subsystem 00:05:50.617 Calling clear_bdev_subsystem 00:05:50.617 00:11:13 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:50.617 00:11:13 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:50.617 00:11:13 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:50.617 00:11:13 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.617 00:11:13 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:50.617 00:11:13 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:50.617 00:11:14 json_config -- json_config/json_config.sh@352 -- # break 00:05:50.617 00:11:14 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:50.617 00:11:14 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:50.618 00:11:14 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:50.618 00:11:14 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:50.618 00:11:14 json_config -- json_config/common.sh@35 -- # [[ -n 101301 ]] 00:05:50.618 00:11:14 json_config -- json_config/common.sh@38 -- # kill -SIGINT 101301 00:05:50.618 00:11:14 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:50.618 00:11:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.618 00:11:14 json_config -- json_config/common.sh@41 -- # kill -0 101301 00:05:50.618 00:11:14 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.188 00:11:14 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.188 00:11:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.188 00:11:14 json_config -- json_config/common.sh@41 -- # kill -0 101301 00:05:51.188 00:11:14 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:51.188 00:11:14 json_config -- json_config/common.sh@43 -- # break 00:05:51.188 00:11:14 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:51.188 00:11:14 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:51.188 SPDK target shutdown done 00:05:51.188 00:11:14 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:51.188 INFO: relaunching applications... 
00:05:51.188 00:11:14 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:51.188 00:11:14 json_config -- json_config/common.sh@9 -- # local app=target 00:05:51.188 00:11:14 json_config -- json_config/common.sh@10 -- # shift 00:05:51.188 00:11:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:51.188 00:11:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:51.188 00:11:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:51.188 00:11:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.188 00:11:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.188 00:11:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=102621 00:05:51.188 00:11:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:51.188 00:11:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:51.188 Waiting for target to run... 00:05:51.188 00:11:14 json_config -- json_config/common.sh@25 -- # waitforlisten 102621 /var/tmp/spdk_tgt.sock 00:05:51.188 00:11:14 json_config -- common/autotest_common.sh@835 -- # '[' -z 102621 ']' 00:05:51.188 00:11:14 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:51.188 00:11:14 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.188 00:11:14 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:51.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:51.188 00:11:14 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.188 00:11:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.188 [2024-11-18 00:11:14.916745] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:51.188 [2024-11-18 00:11:14.916825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102621 ] 00:05:51.754 [2024-11-18 00:11:15.418567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.754 [2024-11-18 00:11:15.459539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.050 [2024-11-18 00:11:18.508672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.050 [2024-11-18 00:11:18.541117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:55.050 00:11:18 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.050 00:11:18 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:55.050 00:11:18 json_config -- json_config/common.sh@26 -- # echo '' 00:05:55.050 00:05:55.050 00:11:18 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:55.050 00:11:18 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:55.050 INFO: Checking if target configuration is the same... 
00:05:55.050 00:11:18 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.050 00:11:18 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:55.050 00:11:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.050 + '[' 2 -ne 2 ']' 00:05:55.050 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:55.050 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:55.050 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:55.050 +++ basename /dev/fd/62 00:05:55.050 ++ mktemp /tmp/62.XXX 00:05:55.050 + tmp_file_1=/tmp/62.pGG 00:05:55.050 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.050 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:55.050 + tmp_file_2=/tmp/spdk_tgt_config.json.S2y 00:05:55.050 + ret=0 00:05:55.050 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:55.308 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:55.308 + diff -u /tmp/62.pGG /tmp/spdk_tgt_config.json.S2y 00:05:55.308 + echo 'INFO: JSON config files are the same' 00:05:55.308 INFO: JSON config files are the same 00:05:55.308 + rm /tmp/62.pGG /tmp/spdk_tgt_config.json.S2y 00:05:55.308 + exit 0 00:05:55.308 00:11:19 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:55.308 00:11:19 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:55.308 INFO: changing configuration and checking if this can be detected... 
00:05:55.308 00:11:19 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:55.308 00:11:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:55.566 00:11:19 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.566 00:11:19 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:55.566 00:11:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.566 + '[' 2 -ne 2 ']' 00:05:55.566 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:55.566 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:55.566 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:55.566 +++ basename /dev/fd/62 00:05:55.566 ++ mktemp /tmp/62.XXX 00:05:55.566 + tmp_file_1=/tmp/62.9FX 00:05:55.566 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.566 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:55.566 + tmp_file_2=/tmp/spdk_tgt_config.json.dkt 00:05:55.566 + ret=0 00:05:55.566 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:56.132 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:56.132 + diff -u /tmp/62.9FX /tmp/spdk_tgt_config.json.dkt 00:05:56.132 + ret=1 00:05:56.132 + echo '=== Start of file: /tmp/62.9FX ===' 00:05:56.132 + cat /tmp/62.9FX 00:05:56.132 + echo '=== End of file: /tmp/62.9FX ===' 00:05:56.132 + echo '' 00:05:56.132 + echo '=== Start of file: /tmp/spdk_tgt_config.json.dkt ===' 00:05:56.132 + cat /tmp/spdk_tgt_config.json.dkt 00:05:56.132 + echo '=== End of file: /tmp/spdk_tgt_config.json.dkt ===' 00:05:56.132 + echo '' 00:05:56.132 + rm /tmp/62.9FX /tmp/spdk_tgt_config.json.dkt 00:05:56.132 + exit 1 00:05:56.132 00:11:19 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:56.132 INFO: configuration change detected. 
00:05:56.132 00:11:19 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:56.132 00:11:19 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:56.132 00:11:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.132 00:11:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.132 00:11:19 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:56.132 00:11:19 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:56.132 00:11:19 json_config -- json_config/json_config.sh@324 -- # [[ -n 102621 ]] 00:05:56.132 00:11:19 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:56.132 00:11:19 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:56.132 00:11:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.132 00:11:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.132 00:11:19 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:56.132 00:11:19 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:56.132 00:11:19 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:56.132 00:11:19 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:56.132 00:11:19 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:56.133 00:11:19 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:56.133 00:11:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:56.133 00:11:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.133 00:11:19 json_config -- json_config/json_config.sh@330 -- # killprocess 102621 00:05:56.133 00:11:19 json_config -- common/autotest_common.sh@954 -- # '[' -z 102621 ']' 00:05:56.133 00:11:19 json_config -- common/autotest_common.sh@958 -- # kill -0 102621 
00:05:56.133 00:11:19 json_config -- common/autotest_common.sh@959 -- # uname 00:05:56.133 00:11:19 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.133 00:11:19 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102621 00:05:56.133 00:11:19 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.133 00:11:19 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.133 00:11:19 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102621' 00:05:56.133 killing process with pid 102621 00:05:56.133 00:11:19 json_config -- common/autotest_common.sh@973 -- # kill 102621 00:05:56.133 00:11:19 json_config -- common/autotest_common.sh@978 -- # wait 102621 00:05:58.034 00:11:21 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:58.034 00:11:21 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:58.034 00:11:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:58.034 00:11:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.034 00:11:21 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:58.034 00:11:21 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:58.034 INFO: Success 00:05:58.034 00:05:58.034 real 0m16.564s 00:05:58.034 user 0m18.736s 00:05:58.034 sys 0m2.077s 00:05:58.034 00:11:21 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.034 00:11:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.034 ************************************ 00:05:58.034 END TEST json_config 00:05:58.034 ************************************ 00:05:58.034 00:11:21 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:58.034 00:11:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.034 00:11:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.034 00:11:21 -- common/autotest_common.sh@10 -- # set +x 00:05:58.034 ************************************ 00:05:58.034 START TEST json_config_extra_key 00:05:58.034 ************************************ 00:05:58.034 00:11:21 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:58.034 00:11:21 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.034 00:11:21 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.034 00:11:21 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.034 00:11:21 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.034 00:11:21 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:58.034 00:11:21 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.034 00:11:21 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.034 --rc genhtml_branch_coverage=1 00:05:58.034 --rc genhtml_function_coverage=1 00:05:58.034 --rc genhtml_legend=1 00:05:58.034 --rc geninfo_all_blocks=1 
00:05:58.034 --rc geninfo_unexecuted_blocks=1 00:05:58.034 00:05:58.034 ' 00:05:58.035 00:11:21 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.035 --rc genhtml_branch_coverage=1 00:05:58.035 --rc genhtml_function_coverage=1 00:05:58.035 --rc genhtml_legend=1 00:05:58.035 --rc geninfo_all_blocks=1 00:05:58.035 --rc geninfo_unexecuted_blocks=1 00:05:58.035 00:05:58.035 ' 00:05:58.035 00:11:21 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.035 --rc genhtml_branch_coverage=1 00:05:58.035 --rc genhtml_function_coverage=1 00:05:58.035 --rc genhtml_legend=1 00:05:58.035 --rc geninfo_all_blocks=1 00:05:58.035 --rc geninfo_unexecuted_blocks=1 00:05:58.035 00:05:58.035 ' 00:05:58.035 00:11:21 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.035 --rc genhtml_branch_coverage=1 00:05:58.035 --rc genhtml_function_coverage=1 00:05:58.035 --rc genhtml_legend=1 00:05:58.035 --rc geninfo_all_blocks=1 00:05:58.035 --rc geninfo_unexecuted_blocks=1 00:05:58.035 00:05:58.035 ' 00:05:58.035 00:11:21 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.035 00:11:21 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:58.035 00:11:21 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.035 00:11:21 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.035 00:11:21 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.035 00:11:21 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.035 00:11:21 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.035 00:11:21 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.035 00:11:21 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:58.035 00:11:21 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:58.035 00:11:21 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:58.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:58.035 00:11:21 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:58.035 00:11:21 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:58.035 00:11:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:58.035 00:11:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:58.035 00:11:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:58.035 00:11:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:58.035 00:11:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:58.035 00:11:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:58.035 00:11:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:58.035 00:11:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:58.035 00:11:21 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:58.035 00:11:21 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:58.035 INFO: launching applications... 00:05:58.035 00:11:21 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:58.035 00:11:21 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:58.035 00:11:21 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:58.035 00:11:21 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:58.035 00:11:21 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:58.035 00:11:21 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:58.035 00:11:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.035 00:11:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.035 00:11:21 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=103548 00:05:58.035 00:11:21 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:58.035 00:11:21 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:58.035 Waiting for target to run... 
00:05:58.035 00:11:21 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 103548 /var/tmp/spdk_tgt.sock 00:05:58.035 00:11:21 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 103548 ']' 00:05:58.035 00:11:21 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:58.035 00:11:21 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.035 00:11:21 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:58.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:58.035 00:11:21 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.035 00:11:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:58.035 [2024-11-18 00:11:21.722615] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:58.035 [2024-11-18 00:11:21.722711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103548 ] 00:05:58.294 [2024-11-18 00:11:22.054540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.294 [2024-11-18 00:11:22.086829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.229 00:11:22 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.229 00:11:22 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:59.229 00:11:22 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:59.229 00:05:59.229 00:11:22 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:59.229 INFO: shutting down applications... 00:05:59.229 00:11:22 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:59.229 00:11:22 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:59.229 00:11:22 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:59.229 00:11:22 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 103548 ]] 00:05:59.229 00:11:22 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 103548 00:05:59.229 00:11:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:59.229 00:11:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.229 00:11:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 103548 00:05:59.229 00:11:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.489 00:11:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.489 00:11:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.489 00:11:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 103548 00:05:59.489 00:11:23 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:59.489 00:11:23 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:59.489 00:11:23 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:59.489 00:11:23 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:59.489 SPDK target shutdown done 00:05:59.489 00:11:23 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:59.489 Success 00:05:59.489 00:05:59.489 real 0m1.673s 00:05:59.489 user 0m1.634s 00:05:59.489 sys 0m0.448s 00:05:59.489 00:11:23 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.489 00:11:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 
00:05:59.489 ************************************ 00:05:59.489 END TEST json_config_extra_key 00:05:59.489 ************************************ 00:05:59.489 00:11:23 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:59.489 00:11:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.489 00:11:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.489 00:11:23 -- common/autotest_common.sh@10 -- # set +x 00:05:59.489 ************************************ 00:05:59.489 START TEST alias_rpc 00:05:59.489 ************************************ 00:05:59.489 00:11:23 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:59.489 * Looking for test storage... 00:05:59.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:59.489 00:11:23 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:59.489 00:11:23 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:59.489 00:11:23 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:59.750 00:11:23 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 
00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.750 00:11:23 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:59.750 00:11:23 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.750 00:11:23 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:59.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.750 --rc genhtml_branch_coverage=1 00:05:59.750 --rc genhtml_function_coverage=1 00:05:59.751 --rc genhtml_legend=1 00:05:59.751 --rc geninfo_all_blocks=1 00:05:59.751 --rc geninfo_unexecuted_blocks=1 00:05:59.751 00:05:59.751 ' 
00:05:59.751 00:11:23 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:59.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.751 --rc genhtml_branch_coverage=1 00:05:59.751 --rc genhtml_function_coverage=1 00:05:59.751 --rc genhtml_legend=1 00:05:59.751 --rc geninfo_all_blocks=1 00:05:59.751 --rc geninfo_unexecuted_blocks=1 00:05:59.751 00:05:59.751 ' 00:05:59.751 00:11:23 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:59.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.751 --rc genhtml_branch_coverage=1 00:05:59.751 --rc genhtml_function_coverage=1 00:05:59.751 --rc genhtml_legend=1 00:05:59.751 --rc geninfo_all_blocks=1 00:05:59.751 --rc geninfo_unexecuted_blocks=1 00:05:59.751 00:05:59.751 ' 00:05:59.751 00:11:23 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:59.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.751 --rc genhtml_branch_coverage=1 00:05:59.751 --rc genhtml_function_coverage=1 00:05:59.751 --rc genhtml_legend=1 00:05:59.751 --rc geninfo_all_blocks=1 00:05:59.751 --rc geninfo_unexecuted_blocks=1 00:05:59.751 00:05:59.751 ' 00:05:59.751 00:11:23 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:59.751 00:11:23 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=103861 00:05:59.751 00:11:23 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.751 00:11:23 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 103861 00:05:59.751 00:11:23 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 103861 ']' 00:05:59.751 00:11:23 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.751 00:11:23 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.751 00:11:23 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.752 00:11:23 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.752 00:11:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.752 [2024-11-18 00:11:23.445793] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:59.752 [2024-11-18 00:11:23.445880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103861 ] 00:05:59.752 [2024-11-18 00:11:23.511708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.752 [2024-11-18 00:11:23.556712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.014 00:11:23 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.014 00:11:23 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:00.014 00:11:23 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:00.582 00:11:24 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 103861 00:06:00.582 00:11:24 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 103861 ']' 00:06:00.582 00:11:24 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 103861 00:06:00.582 00:11:24 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:00.582 00:11:24 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.582 00:11:24 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103861 00:06:00.582 00:11:24 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.582 00:11:24 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.582 00:11:24 alias_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 103861' 00:06:00.582 killing process with pid 103861 00:06:00.582 00:11:24 alias_rpc -- common/autotest_common.sh@973 -- # kill 103861 00:06:00.582 00:11:24 alias_rpc -- common/autotest_common.sh@978 -- # wait 103861 00:06:00.840 00:06:00.840 real 0m1.259s 00:06:00.840 user 0m1.380s 00:06:00.840 sys 0m0.434s 00:06:00.840 00:11:24 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.840 00:11:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.840 ************************************ 00:06:00.840 END TEST alias_rpc 00:06:00.840 ************************************ 00:06:00.840 00:11:24 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:00.840 00:11:24 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:00.840 00:11:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.840 00:11:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.840 00:11:24 -- common/autotest_common.sh@10 -- # set +x 00:06:00.840 ************************************ 00:06:00.840 START TEST spdkcli_tcp 00:06:00.840 ************************************ 00:06:00.840 00:11:24 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:00.840 * Looking for test storage... 
00:06:00.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:00.840 00:11:24 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.840 00:11:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.840 00:11:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:01.099 00:11:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.099 00:11:24 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:01.100 00:11:24 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.100 00:11:24 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:01.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.100 --rc genhtml_branch_coverage=1 00:06:01.100 --rc genhtml_function_coverage=1 00:06:01.100 --rc genhtml_legend=1 00:06:01.100 --rc geninfo_all_blocks=1 00:06:01.100 --rc geninfo_unexecuted_blocks=1 00:06:01.100 00:06:01.100 ' 00:06:01.100 00:11:24 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:01.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.100 --rc genhtml_branch_coverage=1 00:06:01.100 --rc genhtml_function_coverage=1 00:06:01.100 --rc genhtml_legend=1 00:06:01.100 --rc geninfo_all_blocks=1 00:06:01.100 --rc geninfo_unexecuted_blocks=1 00:06:01.100 00:06:01.100 ' 00:06:01.100 00:11:24 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:01.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.100 --rc genhtml_branch_coverage=1 00:06:01.100 --rc genhtml_function_coverage=1 00:06:01.100 --rc genhtml_legend=1 00:06:01.100 --rc geninfo_all_blocks=1 00:06:01.100 --rc geninfo_unexecuted_blocks=1 00:06:01.100 00:06:01.100 ' 00:06:01.100 00:11:24 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:01.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.100 --rc genhtml_branch_coverage=1 00:06:01.100 --rc genhtml_function_coverage=1 00:06:01.100 --rc genhtml_legend=1 00:06:01.100 --rc geninfo_all_blocks=1 00:06:01.100 --rc geninfo_unexecuted_blocks=1 00:06:01.100 00:06:01.100 ' 00:06:01.100 00:11:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:01.100 00:11:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:01.100 00:11:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:01.100 00:11:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:01.100 00:11:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:01.100 00:11:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:01.100 00:11:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:01.100 00:11:24 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.100 00:11:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.100 00:11:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=104058 00:06:01.100 00:11:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:01.100 00:11:24 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 104058 00:06:01.100 00:11:24 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 104058 ']' 00:06:01.100 00:11:24 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.100 00:11:24 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.100 00:11:24 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.100 00:11:24 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.100 00:11:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.100 [2024-11-18 00:11:24.769397] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:01.100 [2024-11-18 00:11:24.769496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104058 ] 00:06:01.100 [2024-11-18 00:11:24.835857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.100 [2024-11-18 00:11:24.885481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.100 [2024-11-18 00:11:24.885486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.359 00:11:25 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.359 00:11:25 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:01.359 00:11:25 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=104071 00:06:01.359 00:11:25 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:01.359 00:11:25 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 
127.0.0.1 -p 9998 rpc_get_methods 00:06:01.617 [ 00:06:01.617 "bdev_malloc_delete", 00:06:01.617 "bdev_malloc_create", 00:06:01.617 "bdev_null_resize", 00:06:01.617 "bdev_null_delete", 00:06:01.617 "bdev_null_create", 00:06:01.617 "bdev_nvme_cuse_unregister", 00:06:01.617 "bdev_nvme_cuse_register", 00:06:01.617 "bdev_opal_new_user", 00:06:01.617 "bdev_opal_set_lock_state", 00:06:01.617 "bdev_opal_delete", 00:06:01.617 "bdev_opal_get_info", 00:06:01.617 "bdev_opal_create", 00:06:01.617 "bdev_nvme_opal_revert", 00:06:01.617 "bdev_nvme_opal_init", 00:06:01.617 "bdev_nvme_send_cmd", 00:06:01.617 "bdev_nvme_set_keys", 00:06:01.617 "bdev_nvme_get_path_iostat", 00:06:01.617 "bdev_nvme_get_mdns_discovery_info", 00:06:01.617 "bdev_nvme_stop_mdns_discovery", 00:06:01.617 "bdev_nvme_start_mdns_discovery", 00:06:01.617 "bdev_nvme_set_multipath_policy", 00:06:01.617 "bdev_nvme_set_preferred_path", 00:06:01.617 "bdev_nvme_get_io_paths", 00:06:01.617 "bdev_nvme_remove_error_injection", 00:06:01.617 "bdev_nvme_add_error_injection", 00:06:01.617 "bdev_nvme_get_discovery_info", 00:06:01.617 "bdev_nvme_stop_discovery", 00:06:01.617 "bdev_nvme_start_discovery", 00:06:01.617 "bdev_nvme_get_controller_health_info", 00:06:01.617 "bdev_nvme_disable_controller", 00:06:01.617 "bdev_nvme_enable_controller", 00:06:01.617 "bdev_nvme_reset_controller", 00:06:01.617 "bdev_nvme_get_transport_statistics", 00:06:01.617 "bdev_nvme_apply_firmware", 00:06:01.617 "bdev_nvme_detach_controller", 00:06:01.617 "bdev_nvme_get_controllers", 00:06:01.617 "bdev_nvme_attach_controller", 00:06:01.617 "bdev_nvme_set_hotplug", 00:06:01.617 "bdev_nvme_set_options", 00:06:01.617 "bdev_passthru_delete", 00:06:01.617 "bdev_passthru_create", 00:06:01.617 "bdev_lvol_set_parent_bdev", 00:06:01.617 "bdev_lvol_set_parent", 00:06:01.617 "bdev_lvol_check_shallow_copy", 00:06:01.617 "bdev_lvol_start_shallow_copy", 00:06:01.617 "bdev_lvol_grow_lvstore", 00:06:01.617 "bdev_lvol_get_lvols", 00:06:01.617 "bdev_lvol_get_lvstores", 
00:06:01.617 "bdev_lvol_delete", 00:06:01.617 "bdev_lvol_set_read_only", 00:06:01.617 "bdev_lvol_resize", 00:06:01.617 "bdev_lvol_decouple_parent", 00:06:01.617 "bdev_lvol_inflate", 00:06:01.617 "bdev_lvol_rename", 00:06:01.617 "bdev_lvol_clone_bdev", 00:06:01.617 "bdev_lvol_clone", 00:06:01.617 "bdev_lvol_snapshot", 00:06:01.617 "bdev_lvol_create", 00:06:01.617 "bdev_lvol_delete_lvstore", 00:06:01.617 "bdev_lvol_rename_lvstore", 00:06:01.617 "bdev_lvol_create_lvstore", 00:06:01.617 "bdev_raid_set_options", 00:06:01.617 "bdev_raid_remove_base_bdev", 00:06:01.617 "bdev_raid_add_base_bdev", 00:06:01.617 "bdev_raid_delete", 00:06:01.617 "bdev_raid_create", 00:06:01.617 "bdev_raid_get_bdevs", 00:06:01.617 "bdev_error_inject_error", 00:06:01.617 "bdev_error_delete", 00:06:01.617 "bdev_error_create", 00:06:01.618 "bdev_split_delete", 00:06:01.618 "bdev_split_create", 00:06:01.618 "bdev_delay_delete", 00:06:01.618 "bdev_delay_create", 00:06:01.618 "bdev_delay_update_latency", 00:06:01.618 "bdev_zone_block_delete", 00:06:01.618 "bdev_zone_block_create", 00:06:01.618 "blobfs_create", 00:06:01.618 "blobfs_detect", 00:06:01.618 "blobfs_set_cache_size", 00:06:01.618 "bdev_aio_delete", 00:06:01.618 "bdev_aio_rescan", 00:06:01.618 "bdev_aio_create", 00:06:01.618 "bdev_ftl_set_property", 00:06:01.618 "bdev_ftl_get_properties", 00:06:01.618 "bdev_ftl_get_stats", 00:06:01.618 "bdev_ftl_unmap", 00:06:01.618 "bdev_ftl_unload", 00:06:01.618 "bdev_ftl_delete", 00:06:01.618 "bdev_ftl_load", 00:06:01.618 "bdev_ftl_create", 00:06:01.618 "bdev_virtio_attach_controller", 00:06:01.618 "bdev_virtio_scsi_get_devices", 00:06:01.618 "bdev_virtio_detach_controller", 00:06:01.618 "bdev_virtio_blk_set_hotplug", 00:06:01.618 "bdev_iscsi_delete", 00:06:01.618 "bdev_iscsi_create", 00:06:01.618 "bdev_iscsi_set_options", 00:06:01.618 "accel_error_inject_error", 00:06:01.618 "ioat_scan_accel_module", 00:06:01.618 "dsa_scan_accel_module", 00:06:01.618 "iaa_scan_accel_module", 00:06:01.618 
"vfu_virtio_create_fs_endpoint", 00:06:01.618 "vfu_virtio_create_scsi_endpoint", 00:06:01.618 "vfu_virtio_scsi_remove_target", 00:06:01.618 "vfu_virtio_scsi_add_target", 00:06:01.618 "vfu_virtio_create_blk_endpoint", 00:06:01.618 "vfu_virtio_delete_endpoint", 00:06:01.618 "keyring_file_remove_key", 00:06:01.618 "keyring_file_add_key", 00:06:01.618 "keyring_linux_set_options", 00:06:01.618 "fsdev_aio_delete", 00:06:01.618 "fsdev_aio_create", 00:06:01.618 "iscsi_get_histogram", 00:06:01.618 "iscsi_enable_histogram", 00:06:01.618 "iscsi_set_options", 00:06:01.618 "iscsi_get_auth_groups", 00:06:01.618 "iscsi_auth_group_remove_secret", 00:06:01.618 "iscsi_auth_group_add_secret", 00:06:01.618 "iscsi_delete_auth_group", 00:06:01.618 "iscsi_create_auth_group", 00:06:01.618 "iscsi_set_discovery_auth", 00:06:01.618 "iscsi_get_options", 00:06:01.618 "iscsi_target_node_request_logout", 00:06:01.618 "iscsi_target_node_set_redirect", 00:06:01.618 "iscsi_target_node_set_auth", 00:06:01.618 "iscsi_target_node_add_lun", 00:06:01.618 "iscsi_get_stats", 00:06:01.618 "iscsi_get_connections", 00:06:01.618 "iscsi_portal_group_set_auth", 00:06:01.618 "iscsi_start_portal_group", 00:06:01.618 "iscsi_delete_portal_group", 00:06:01.618 "iscsi_create_portal_group", 00:06:01.618 "iscsi_get_portal_groups", 00:06:01.618 "iscsi_delete_target_node", 00:06:01.618 "iscsi_target_node_remove_pg_ig_maps", 00:06:01.618 "iscsi_target_node_add_pg_ig_maps", 00:06:01.618 "iscsi_create_target_node", 00:06:01.618 "iscsi_get_target_nodes", 00:06:01.618 "iscsi_delete_initiator_group", 00:06:01.618 "iscsi_initiator_group_remove_initiators", 00:06:01.618 "iscsi_initiator_group_add_initiators", 00:06:01.618 "iscsi_create_initiator_group", 00:06:01.618 "iscsi_get_initiator_groups", 00:06:01.618 "nvmf_set_crdt", 00:06:01.618 "nvmf_set_config", 00:06:01.618 "nvmf_set_max_subsystems", 00:06:01.618 "nvmf_stop_mdns_prr", 00:06:01.618 "nvmf_publish_mdns_prr", 00:06:01.618 "nvmf_subsystem_get_listeners", 00:06:01.618 
"nvmf_subsystem_get_qpairs", 00:06:01.618 "nvmf_subsystem_get_controllers", 00:06:01.618 "nvmf_get_stats", 00:06:01.618 "nvmf_get_transports", 00:06:01.618 "nvmf_create_transport", 00:06:01.618 "nvmf_get_targets", 00:06:01.618 "nvmf_delete_target", 00:06:01.618 "nvmf_create_target", 00:06:01.618 "nvmf_subsystem_allow_any_host", 00:06:01.618 "nvmf_subsystem_set_keys", 00:06:01.618 "nvmf_subsystem_remove_host", 00:06:01.618 "nvmf_subsystem_add_host", 00:06:01.618 "nvmf_ns_remove_host", 00:06:01.618 "nvmf_ns_add_host", 00:06:01.618 "nvmf_subsystem_remove_ns", 00:06:01.618 "nvmf_subsystem_set_ns_ana_group", 00:06:01.618 "nvmf_subsystem_add_ns", 00:06:01.618 "nvmf_subsystem_listener_set_ana_state", 00:06:01.618 "nvmf_discovery_get_referrals", 00:06:01.618 "nvmf_discovery_remove_referral", 00:06:01.618 "nvmf_discovery_add_referral", 00:06:01.618 "nvmf_subsystem_remove_listener", 00:06:01.618 "nvmf_subsystem_add_listener", 00:06:01.618 "nvmf_delete_subsystem", 00:06:01.618 "nvmf_create_subsystem", 00:06:01.618 "nvmf_get_subsystems", 00:06:01.618 "env_dpdk_get_mem_stats", 00:06:01.618 "nbd_get_disks", 00:06:01.618 "nbd_stop_disk", 00:06:01.618 "nbd_start_disk", 00:06:01.618 "ublk_recover_disk", 00:06:01.618 "ublk_get_disks", 00:06:01.618 "ublk_stop_disk", 00:06:01.618 "ublk_start_disk", 00:06:01.618 "ublk_destroy_target", 00:06:01.618 "ublk_create_target", 00:06:01.618 "virtio_blk_create_transport", 00:06:01.618 "virtio_blk_get_transports", 00:06:01.618 "vhost_controller_set_coalescing", 00:06:01.618 "vhost_get_controllers", 00:06:01.618 "vhost_delete_controller", 00:06:01.618 "vhost_create_blk_controller", 00:06:01.618 "vhost_scsi_controller_remove_target", 00:06:01.618 "vhost_scsi_controller_add_target", 00:06:01.618 "vhost_start_scsi_controller", 00:06:01.618 "vhost_create_scsi_controller", 00:06:01.618 "thread_set_cpumask", 00:06:01.618 "scheduler_set_options", 00:06:01.618 "framework_get_governor", 00:06:01.618 "framework_get_scheduler", 00:06:01.618 
"framework_set_scheduler", 00:06:01.618 "framework_get_reactors", 00:06:01.618 "thread_get_io_channels", 00:06:01.618 "thread_get_pollers", 00:06:01.618 "thread_get_stats", 00:06:01.618 "framework_monitor_context_switch", 00:06:01.618 "spdk_kill_instance", 00:06:01.618 "log_enable_timestamps", 00:06:01.618 "log_get_flags", 00:06:01.618 "log_clear_flag", 00:06:01.618 "log_set_flag", 00:06:01.618 "log_get_level", 00:06:01.618 "log_set_level", 00:06:01.618 "log_get_print_level", 00:06:01.618 "log_set_print_level", 00:06:01.618 "framework_enable_cpumask_locks", 00:06:01.618 "framework_disable_cpumask_locks", 00:06:01.618 "framework_wait_init", 00:06:01.618 "framework_start_init", 00:06:01.618 "scsi_get_devices", 00:06:01.618 "bdev_get_histogram", 00:06:01.618 "bdev_enable_histogram", 00:06:01.618 "bdev_set_qos_limit", 00:06:01.618 "bdev_set_qd_sampling_period", 00:06:01.618 "bdev_get_bdevs", 00:06:01.618 "bdev_reset_iostat", 00:06:01.618 "bdev_get_iostat", 00:06:01.618 "bdev_examine", 00:06:01.618 "bdev_wait_for_examine", 00:06:01.618 "bdev_set_options", 00:06:01.618 "accel_get_stats", 00:06:01.618 "accel_set_options", 00:06:01.618 "accel_set_driver", 00:06:01.618 "accel_crypto_key_destroy", 00:06:01.618 "accel_crypto_keys_get", 00:06:01.618 "accel_crypto_key_create", 00:06:01.618 "accel_assign_opc", 00:06:01.618 "accel_get_module_info", 00:06:01.618 "accel_get_opc_assignments", 00:06:01.618 "vmd_rescan", 00:06:01.618 "vmd_remove_device", 00:06:01.618 "vmd_enable", 00:06:01.618 "sock_get_default_impl", 00:06:01.618 "sock_set_default_impl", 00:06:01.618 "sock_impl_set_options", 00:06:01.618 "sock_impl_get_options", 00:06:01.618 "iobuf_get_stats", 00:06:01.618 "iobuf_set_options", 00:06:01.618 "keyring_get_keys", 00:06:01.618 "vfu_tgt_set_base_path", 00:06:01.618 "framework_get_pci_devices", 00:06:01.618 "framework_get_config", 00:06:01.618 "framework_get_subsystems", 00:06:01.618 "fsdev_set_opts", 00:06:01.618 "fsdev_get_opts", 00:06:01.618 "trace_get_info", 
00:06:01.618 "trace_get_tpoint_group_mask", 00:06:01.618 "trace_disable_tpoint_group", 00:06:01.618 "trace_enable_tpoint_group", 00:06:01.618 "trace_clear_tpoint_mask", 00:06:01.618 "trace_set_tpoint_mask", 00:06:01.618 "notify_get_notifications", 00:06:01.618 "notify_get_types", 00:06:01.618 "spdk_get_version", 00:06:01.618 "rpc_get_methods" 00:06:01.618 ] 00:06:01.618 00:11:25 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:01.618 00:11:25 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:01.618 00:11:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.618 00:11:25 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:01.618 00:11:25 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 104058 00:06:01.618 00:11:25 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 104058 ']' 00:06:01.618 00:11:25 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 104058 00:06:01.618 00:11:25 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:01.618 00:11:25 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.618 00:11:25 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104058 00:06:01.876 00:11:25 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.876 00:11:25 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.876 00:11:25 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104058' 00:06:01.876 killing process with pid 104058 00:06:01.876 00:11:25 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 104058 00:06:01.876 00:11:25 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 104058 00:06:02.134 00:06:02.134 real 0m1.291s 00:06:02.134 user 0m2.340s 00:06:02.134 sys 0m0.461s 00:06:02.134 00:11:25 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.134 00:11:25 spdkcli_tcp -- common/autotest_common.sh@10 -- 
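The spdkcli_tcp run above bridges the spdk_tgt UNIX-domain RPC socket to TCP port 9998 with `socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock` and then issues `rpc_get_methods` through that bridge via `scripts/rpc.py -s 127.0.0.1 -p 9998`. As a minimal sketch, a client such as rpc.py sends a JSON-RPC 2.0 request like the one built below (the helper name and the exact wire framing are assumptions for illustration; only the method name and JSON-RPC 2.0 shape are taken from the log and SPDK's documented RPC protocol):

```python
import json

def build_rpc_request(method, request_id=1, params=None):
    # Assemble a JSON-RPC 2.0 request body such as an SPDK RPC client
    # would send over the (socat-bridged) /var/tmp/spdk.sock socket.
    req = {"jsonrpc": "2.0", "method": method, "id": request_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

# The call exercised by the spdkcli_tcp test: list all registered RPC methods.
payload = build_rpc_request("rpc_get_methods")
```

The server's reply to this call is exactly the long method array recorded in the log above.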
# set +x 00:06:02.134 ************************************ 00:06:02.134 END TEST spdkcli_tcp 00:06:02.134 ************************************ 00:06:02.134 00:11:25 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:02.134 00:11:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.134 00:11:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.134 00:11:25 -- common/autotest_common.sh@10 -- # set +x 00:06:02.134 ************************************ 00:06:02.134 START TEST dpdk_mem_utility 00:06:02.134 ************************************ 00:06:02.134 00:11:25 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:02.134 * Looking for test storage... 00:06:02.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:02.394 00:11:25 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.394 00:11:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.394 00:11:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.394 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.394 00:11:26 
dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.394 00:11:26 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:02.394 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.394 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.394 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.394 --rc genhtml_branch_coverage=1 00:06:02.394 --rc genhtml_function_coverage=1 00:06:02.394 --rc genhtml_legend=1 00:06:02.394 --rc geninfo_all_blocks=1 00:06:02.394 --rc geninfo_unexecuted_blocks=1 00:06:02.394 00:06:02.394 ' 00:06:02.394 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.394 --rc genhtml_branch_coverage=1 00:06:02.394 --rc genhtml_function_coverage=1 00:06:02.394 --rc genhtml_legend=1 00:06:02.394 --rc geninfo_all_blocks=1 00:06:02.394 --rc geninfo_unexecuted_blocks=1 00:06:02.394 00:06:02.394 ' 00:06:02.394 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.394 --rc genhtml_branch_coverage=1 00:06:02.394 --rc genhtml_function_coverage=1 00:06:02.394 --rc genhtml_legend=1 00:06:02.394 --rc geninfo_all_blocks=1 00:06:02.394 --rc geninfo_unexecuted_blocks=1 00:06:02.394 00:06:02.394 ' 00:06:02.394 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.394 --rc genhtml_branch_coverage=1 00:06:02.394 --rc genhtml_function_coverage=1 00:06:02.394 --rc genhtml_legend=1 00:06:02.394 --rc geninfo_all_blocks=1 00:06:02.394 --rc geninfo_unexecuted_blocks=1 00:06:02.394 00:06:02.394 ' 00:06:02.394 00:11:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:02.394 00:11:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=104270 00:06:02.394 00:11:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.394 00:11:26 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 104270 00:06:02.394 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 104270 ']' 00:06:02.394 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.394 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.394 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.394 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.394 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:02.394 [2024-11-18 00:11:26.106047] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:02.394 [2024-11-18 00:11:26.106139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104270 ] 00:06:02.394 [2024-11-18 00:11:26.170956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.652 [2024-11-18 00:11:26.216784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.652 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.652 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:02.652 00:11:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:02.652 00:11:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:02.652 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.652 
00:11:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:02.652 { 00:06:02.652 "filename": "/tmp/spdk_mem_dump.txt" 00:06:02.652 } 00:06:02.910 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.910 00:11:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:02.910 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:02.910 1 heaps totaling size 810.000000 MiB 00:06:02.910 size: 810.000000 MiB heap id: 0 00:06:02.910 end heaps---------- 00:06:02.910 9 mempools totaling size 595.772034 MiB 00:06:02.910 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:02.910 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:02.910 size: 92.545471 MiB name: bdev_io_104270 00:06:02.910 size: 50.003479 MiB name: msgpool_104270 00:06:02.910 size: 36.509338 MiB name: fsdev_io_104270 00:06:02.910 size: 21.763794 MiB name: PDU_Pool 00:06:02.910 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:02.910 size: 4.133484 MiB name: evtpool_104270 00:06:02.910 size: 0.026123 MiB name: Session_Pool 00:06:02.910 end mempools------- 00:06:02.910 6 memzones totaling size 4.142822 MiB 00:06:02.910 size: 1.000366 MiB name: RG_ring_0_104270 00:06:02.910 size: 1.000366 MiB name: RG_ring_1_104270 00:06:02.910 size: 1.000366 MiB name: RG_ring_4_104270 00:06:02.910 size: 1.000366 MiB name: RG_ring_5_104270 00:06:02.910 size: 0.125366 MiB name: RG_ring_2_104270 00:06:02.910 size: 0.015991 MiB name: RG_ring_3_104270 00:06:02.910 end memzones------- 00:06:02.910 00:11:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:02.910 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:02.910 list of free elements. 
size: 10.862488 MiB 00:06:02.910 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:02.910 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:02.910 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:02.910 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:02.910 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:02.910 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:02.910 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:02.910 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:02.910 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:02.910 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:02.910 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:02.910 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:02.910 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:02.910 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:02.910 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:02.910 list of standard malloc elements. 
size: 199.218628 MiB 00:06:02.910 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:02.910 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:02.910 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:02.910 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:02.910 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:02.910 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:02.910 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:02.910 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:02.910 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:02.910 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:02.910 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:02.910 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:02.910 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:02.910 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:02.910 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:02.910 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:02.910 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:02.910 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:02.910 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:02.910 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:02.910 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:02.910 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:02.910 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:02.911 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:02.911 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:02.911 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:02.911 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:02.911 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:02.911 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:02.911 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:02.911 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:02.911 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:02.911 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:02.911 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:02.911 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:02.911 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:02.911 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:02.911 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:02.911 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:02.911 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:02.911 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:02.911 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:02.911 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:02.911 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:02.911 list of memzone associated elements. 
size: 599.918884 MiB 00:06:02.911 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:02.911 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:02.911 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:02.911 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:02.911 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:02.911 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_104270_0 00:06:02.911 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:02.911 associated memzone info: size: 48.002930 MiB name: MP_msgpool_104270_0 00:06:02.911 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:02.911 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_104270_0 00:06:02.911 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:02.911 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:02.911 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:02.911 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:02.911 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:02.911 associated memzone info: size: 3.000122 MiB name: MP_evtpool_104270_0 00:06:02.911 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:02.911 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_104270 00:06:02.911 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:02.911 associated memzone info: size: 1.007996 MiB name: MP_evtpool_104270 00:06:02.911 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:02.911 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:02.911 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:02.911 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:02.911 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:02.911 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:02.911 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:02.911 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:02.911 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:02.911 associated memzone info: size: 1.000366 MiB name: RG_ring_0_104270 00:06:02.911 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:02.911 associated memzone info: size: 1.000366 MiB name: RG_ring_1_104270 00:06:02.911 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:02.911 associated memzone info: size: 1.000366 MiB name: RG_ring_4_104270 00:06:02.911 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:02.911 associated memzone info: size: 1.000366 MiB name: RG_ring_5_104270 00:06:02.911 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:02.911 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_104270 00:06:02.911 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:02.911 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_104270 00:06:02.911 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:02.911 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:02.911 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:02.911 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:02.911 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:02.911 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:02.911 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:02.911 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_104270 00:06:02.911 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:02.911 associated memzone info: size: 0.125366 MiB name: RG_ring_2_104270 00:06:02.911 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:02.911 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:02.911 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:02.911 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:02.911 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:02.911 associated memzone info: size: 0.015991 MiB name: RG_ring_3_104270 00:06:02.911 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:02.911 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:02.911 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:02.911 associated memzone info: size: 0.000183 MiB name: MP_msgpool_104270 00:06:02.911 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:02.911 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_104270 00:06:02.911 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:02.911 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_104270 00:06:02.911 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:02.911 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:02.911 00:11:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:02.911 00:11:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 104270 00:06:02.911 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 104270 ']' 00:06:02.911 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 104270 00:06:02.911 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:02.911 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.911 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104270 00:06:02.911 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.911 00:11:26 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.911 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104270' 00:06:02.911 killing process with pid 104270 00:06:02.911 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 104270 00:06:02.911 00:11:26 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 104270 00:06:03.478 00:06:03.478 real 0m1.108s 00:06:03.478 user 0m1.106s 00:06:03.478 sys 0m0.410s 00:06:03.478 00:11:27 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.478 00:11:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.478 ************************************ 00:06:03.478 END TEST dpdk_mem_utility 00:06:03.478 ************************************ 00:06:03.478 00:11:27 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:03.478 00:11:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.478 00:11:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.478 00:11:27 -- common/autotest_common.sh@10 -- # set +x 00:06:03.478 ************************************ 00:06:03.478 START TEST event 00:06:03.478 ************************************ 00:06:03.478 00:11:27 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:03.478 * Looking for test storage... 
00:06:03.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:03.478 00:11:27 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:03.478 00:11:27 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:03.478 00:11:27 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.478 00:11:27 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.478 00:11:27 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.478 00:11:27 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.478 00:11:27 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.478 00:11:27 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.478 00:11:27 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.478 00:11:27 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.478 00:11:27 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.478 00:11:27 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.478 00:11:27 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.478 00:11:27 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.478 00:11:27 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.478 00:11:27 event -- scripts/common.sh@344 -- # case "$op" in 00:06:03.478 00:11:27 event -- scripts/common.sh@345 -- # : 1 00:06:03.478 00:11:27 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.478 00:11:27 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.478 00:11:27 event -- scripts/common.sh@365 -- # decimal 1 00:06:03.478 00:11:27 event -- scripts/common.sh@353 -- # local d=1 00:06:03.478 00:11:27 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.478 00:11:27 event -- scripts/common.sh@355 -- # echo 1 00:06:03.478 00:11:27 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.478 00:11:27 event -- scripts/common.sh@366 -- # decimal 2 00:06:03.478 00:11:27 event -- scripts/common.sh@353 -- # local d=2 00:06:03.478 00:11:27 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.478 00:11:27 event -- scripts/common.sh@355 -- # echo 2 00:06:03.478 00:11:27 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.478 00:11:27 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.478 00:11:27 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.478 00:11:27 event -- scripts/common.sh@368 -- # return 0 00:06:03.478 00:11:27 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.478 00:11:27 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.478 --rc genhtml_branch_coverage=1 00:06:03.478 --rc genhtml_function_coverage=1 00:06:03.478 --rc genhtml_legend=1 00:06:03.478 --rc geninfo_all_blocks=1 00:06:03.478 --rc geninfo_unexecuted_blocks=1 00:06:03.478 00:06:03.478 ' 00:06:03.478 00:11:27 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.478 --rc genhtml_branch_coverage=1 00:06:03.478 --rc genhtml_function_coverage=1 00:06:03.478 --rc genhtml_legend=1 00:06:03.478 --rc geninfo_all_blocks=1 00:06:03.478 --rc geninfo_unexecuted_blocks=1 00:06:03.478 00:06:03.478 ' 00:06:03.478 00:11:27 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.478 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:03.478 --rc genhtml_branch_coverage=1 00:06:03.478 --rc genhtml_function_coverage=1 00:06:03.478 --rc genhtml_legend=1 00:06:03.478 --rc geninfo_all_blocks=1 00:06:03.478 --rc geninfo_unexecuted_blocks=1 00:06:03.478 00:06:03.478 ' 00:06:03.478 00:11:27 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.478 --rc genhtml_branch_coverage=1 00:06:03.478 --rc genhtml_function_coverage=1 00:06:03.478 --rc genhtml_legend=1 00:06:03.479 --rc geninfo_all_blocks=1 00:06:03.479 --rc geninfo_unexecuted_blocks=1 00:06:03.479 00:06:03.479 ' 00:06:03.479 00:11:27 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:03.479 00:11:27 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:03.479 00:11:27 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.479 00:11:27 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:03.479 00:11:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.479 00:11:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.479 ************************************ 00:06:03.479 START TEST event_perf 00:06:03.479 ************************************ 00:06:03.479 00:11:27 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.479 Running I/O for 1 seconds...[2024-11-18 00:11:27.249401] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:03.479 [2024-11-18 00:11:27.249463] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104467 ] 00:06:03.737 [2024-11-18 00:11:27.314324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.737 [2024-11-18 00:11:27.363290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.737 [2024-11-18 00:11:27.363366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.737 [2024-11-18 00:11:27.363421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.737 [2024-11-18 00:11:27.363423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.678 Running I/O for 1 seconds... 00:06:04.678 lcore 0: 234018 00:06:04.678 lcore 1: 234016 00:06:04.678 lcore 2: 234017 00:06:04.678 lcore 3: 234018 00:06:04.678 done. 
00:06:04.678 00:06:04.678 real 0m1.175s 00:06:04.678 user 0m4.099s 00:06:04.678 sys 0m0.071s 00:06:04.678 00:11:28 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.678 00:11:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.678 ************************************ 00:06:04.678 END TEST event_perf 00:06:04.678 ************************************ 00:06:04.678 00:11:28 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:04.678 00:11:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:04.678 00:11:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.678 00:11:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.678 ************************************ 00:06:04.678 START TEST event_reactor 00:06:04.678 ************************************ 00:06:04.678 00:11:28 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:04.678 [2024-11-18 00:11:28.463343] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:04.678 [2024-11-18 00:11:28.463402] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104626 ] 00:06:04.936 [2024-11-18 00:11:28.528447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.936 [2024-11-18 00:11:28.572868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.870 test_start 00:06:05.870 oneshot 00:06:05.870 tick 100 00:06:05.870 tick 100 00:06:05.870 tick 250 00:06:05.870 tick 100 00:06:05.870 tick 100 00:06:05.870 tick 100 00:06:05.870 tick 250 00:06:05.870 tick 500 00:06:05.870 tick 100 00:06:05.870 tick 100 00:06:05.870 tick 250 00:06:05.870 tick 100 00:06:05.870 tick 100 00:06:05.870 test_end 00:06:05.870 00:06:05.870 real 0m1.161s 00:06:05.870 user 0m1.104s 00:06:05.870 sys 0m0.054s 00:06:05.870 00:11:29 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.870 00:11:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:05.870 ************************************ 00:06:05.870 END TEST event_reactor 00:06:05.870 ************************************ 00:06:05.870 00:11:29 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.870 00:11:29 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:05.870 00:11:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.870 00:11:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.870 ************************************ 00:06:05.870 START TEST event_reactor_perf 00:06:05.870 ************************************ 00:06:05.870 00:11:29 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:06:05.870 [2024-11-18 00:11:29.677031] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:05.870 [2024-11-18 00:11:29.677098] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104786 ] 00:06:06.129 [2024-11-18 00:11:29.743289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.129 [2024-11-18 00:11:29.788513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.070 test_start 00:06:07.070 test_end 00:06:07.070 Performance: 439919 events per second 00:06:07.070 00:06:07.070 real 0m1.166s 00:06:07.070 user 0m1.092s 00:06:07.070 sys 0m0.068s 00:06:07.070 00:11:30 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.070 00:11:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.070 ************************************ 00:06:07.070 END TEST event_reactor_perf 00:06:07.070 ************************************ 00:06:07.070 00:11:30 event -- event/event.sh@49 -- # uname -s 00:06:07.070 00:11:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:07.070 00:11:30 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:07.070 00:11:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.070 00:11:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.070 00:11:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.070 ************************************ 00:06:07.070 START TEST event_scheduler 00:06:07.070 ************************************ 00:06:07.070 00:11:30 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:07.329 * Looking for test storage... 00:06:07.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:07.329 00:11:30 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.329 00:11:30 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.329 00:11:30 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.329 00:11:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.329 00:11:31 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.329 00:11:31 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.329 00:11:31 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.329 00:11:31 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.329 00:11:31 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.329 00:11:31 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.329 00:11:31 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.329 00:11:31 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.330 00:11:31 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:07.330 00:11:31 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.330 00:11:31 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.330 --rc genhtml_branch_coverage=1 00:06:07.330 --rc genhtml_function_coverage=1 00:06:07.330 --rc genhtml_legend=1 00:06:07.330 --rc geninfo_all_blocks=1 00:06:07.330 --rc geninfo_unexecuted_blocks=1 00:06:07.330 00:06:07.330 ' 00:06:07.330 00:11:31 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.330 --rc genhtml_branch_coverage=1 00:06:07.330 --rc genhtml_function_coverage=1 00:06:07.330 --rc 
genhtml_legend=1 00:06:07.330 --rc geninfo_all_blocks=1 00:06:07.330 --rc geninfo_unexecuted_blocks=1 00:06:07.330 00:06:07.330 ' 00:06:07.330 00:11:31 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.330 --rc genhtml_branch_coverage=1 00:06:07.330 --rc genhtml_function_coverage=1 00:06:07.330 --rc genhtml_legend=1 00:06:07.330 --rc geninfo_all_blocks=1 00:06:07.330 --rc geninfo_unexecuted_blocks=1 00:06:07.330 00:06:07.330 ' 00:06:07.330 00:11:31 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.330 --rc genhtml_branch_coverage=1 00:06:07.330 --rc genhtml_function_coverage=1 00:06:07.330 --rc genhtml_legend=1 00:06:07.330 --rc geninfo_all_blocks=1 00:06:07.330 --rc geninfo_unexecuted_blocks=1 00:06:07.330 00:06:07.330 ' 00:06:07.330 00:11:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:07.330 00:11:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=104976 00:06:07.330 00:11:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:07.330 00:11:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.330 00:11:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 104976 00:06:07.330 00:11:31 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 104976 ']' 00:06:07.330 00:11:31 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.330 00:11:31 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.330 00:11:31 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:07.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:07.330 00:11:31 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:07.330 00:11:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:07.330 [2024-11-18 00:11:31.074987] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:06:07.330 [2024-11-18 00:11:31.075070] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104976 ]
00:06:07.330 [2024-11-18 00:11:31.144572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:07.589 [2024-11-18 00:11:31.195274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.589 [2024-11-18 00:11:31.195347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:07.589 [2024-11-18 00:11:31.195406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:07.589 [2024-11-18 00:11:31.195409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:07.589 00:11:31 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:07.589 00:11:31 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:06:07.589 00:11:31 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:07.589 00:11:31 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:07.589 00:11:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:07.589 [2024-11-18 00:11:31.312344] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:06:07.589 [2024-11-18 00:11:31.312386] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:06:07.589 [2024-11-18 00:11:31.312404] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:07.589 [2024-11-18 00:11:31.312415] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:07.589 [2024-11-18 00:11:31.312426] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:07.589 00:11:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:07.589 00:11:31 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:07.589 00:11:31 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:07.589 00:11:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:07.589 [2024-11-18 00:11:31.408944] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:07.589 00:11:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:07.589 00:11:31 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:07.589 00:11:31 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:07.589 00:11:31 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:07.589 00:11:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:07.847 ************************************
00:06:07.847 START TEST scheduler_create_thread
00:06:07.847 ************************************
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:07.847 2
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:07.847 3
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:07.847 4
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:07.847 5
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:07.847 6
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:07.847 7
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:07.847 8
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:07.847 9
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:07.847 10
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:07.847 00:11:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:07.848 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:07.848 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:07.848 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:07.848 00:11:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:07.848 00:11:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:07.848 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:07.848 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:07.848 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:07.848 00:11:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:07.848 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:07.848 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:07.848 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:07.848 00:11:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:07.848 00:11:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:07.848 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:07.848 00:11:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:08.414 00:11:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:08.414
00:06:08.414 real	0m0.590s
00:06:08.414 user	0m0.012s
00:06:08.414 sys	0m0.001s
00:06:08.414 00:11:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:08.414 00:11:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:08.414 ************************************
00:06:08.414 END TEST scheduler_create_thread
00:06:08.414 ************************************
00:06:08.414 00:11:32 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:08.414 00:11:32 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 104976
00:06:08.414 00:11:32 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 104976 ']'
00:06:08.414 00:11:32 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 104976
00:06:08.414 00:11:32 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:06:08.414 00:11:32 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:08.414 00:11:32 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104976
00:06:08.414 00:11:32 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:06:08.414 00:11:32 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:06:08.414 00:11:32 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104976'
00:06:08.414 killing process with pid 104976
00:06:08.414 00:11:32 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 104976
00:06:08.414 00:11:32 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 104976
00:06:08.982 [2024-11-18 00:11:32.509029] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:08.982
00:06:08.982 real	0m1.811s
00:06:08.982 user	0m2.495s
00:06:08.982 sys	0m0.353s
00:06:08.982 00:11:32 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:08.982 00:11:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:08.982 ************************************
00:06:08.982 END TEST event_scheduler
00:06:08.982 ************************************
00:06:08.982 00:11:32 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:08.982 00:11:32 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:08.982 00:11:32 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:08.982 00:11:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:08.982 00:11:32 event -- common/autotest_common.sh@10 -- # set +x
00:06:08.982 ************************************
00:06:08.982 START TEST app_repeat
00:06:08.982 ************************************
00:06:08.982 00:11:32 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:06:08.982 00:11:32 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:08.982 00:11:32 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:08.982 00:11:32 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:08.982 00:11:32 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:08.982 00:11:32 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:08.982 00:11:32 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:08.982 00:11:32 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:08.982 00:11:32 event.app_repeat -- event/event.sh@19 -- # repeat_pid=105284
00:06:08.982 00:11:32 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:08.982 00:11:32 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:08.982 00:11:32 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 105284'
00:06:08.982 Process app_repeat pid: 105284
00:06:08.982 00:11:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:08.982 00:11:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:06:08.982 spdk_app_start Round 0
00:06:08.982 00:11:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105284 /var/tmp/spdk-nbd.sock
00:06:08.982 00:11:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105284 ']'
00:06:08.982 00:11:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:08.982 00:11:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:08.982 00:11:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:08.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:08.982 00:11:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:08.982 00:11:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:08.983 [2024-11-18 00:11:32.775420] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:06:08.983 [2024-11-18 00:11:32.775490] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105284 ]
00:06:09.242 [2024-11-18 00:11:32.838778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:09.242 [2024-11-18 00:11:32.882442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:09.242 [2024-11-18 00:11:32.882446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.242 00:11:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:09.242 00:11:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:09.242 00:11:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:09.503 Malloc0
00:06:09.503 00:11:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:10.074 Malloc1
00:06:10.074 00:11:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:10.074 00:11:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:10.074 00:11:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:10.074 00:11:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:10.074 00:11:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.074 00:11:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:10.074 00:11:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:10.074 00:11:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:10.074 00:11:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:10.074 00:11:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:10.074 00:11:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.074 00:11:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:10.074 00:11:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:10.074 00:11:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:10.074 00:11:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:10.074 00:11:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:10.333 /dev/nbd0
00:06:10.333 00:11:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:10.333 00:11:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:10.333 00:11:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:10.333 00:11:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:10.333 00:11:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:10.333 00:11:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:10.333 00:11:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:10.333 00:11:33 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:10.333 00:11:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:10.333 00:11:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:10.333 00:11:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:10.333 1+0 records in
00:06:10.333 1+0 records out
00:06:10.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209239 s, 19.6 MB/s
00:06:10.333 00:11:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:10.333 00:11:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:10.333 00:11:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:10.333 00:11:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:10.333 00:11:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:10.333 00:11:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:10.333 00:11:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:10.333 00:11:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:10.591 /dev/nbd1
00:06:10.591 00:11:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:10.591 00:11:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:10.591 00:11:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:10.591 00:11:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:10.591 00:11:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:10.591 00:11:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:10.591 00:11:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:10.591 00:11:34 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:10.591 00:11:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:10.591 00:11:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:10.591 00:11:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:10.591 1+0 records in
00:06:10.591 1+0 records out
00:06:10.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000152989 s, 26.8 MB/s
00:06:10.591 00:11:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:10.591 00:11:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:10.591 00:11:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:10.591 00:11:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:10.592 00:11:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:10.592 00:11:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:10.592 00:11:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:10.592 00:11:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:10.592 00:11:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:10.592 00:11:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:10.850 {
00:06:10.850 "nbd_device": "/dev/nbd0",
00:06:10.850 "bdev_name": "Malloc0"
00:06:10.850 },
00:06:10.850 {
00:06:10.850 "nbd_device": "/dev/nbd1",
00:06:10.850 "bdev_name": "Malloc1"
00:06:10.850 }
00:06:10.850 ]'
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:10.850 {
00:06:10.850 "nbd_device": "/dev/nbd0",
00:06:10.850 "bdev_name": "Malloc0"
00:06:10.850 },
00:06:10.850 {
00:06:10.850 "nbd_device": "/dev/nbd1",
00:06:10.850 "bdev_name": "Malloc1"
00:06:10.850 }
00:06:10.850 ]'
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:10.850 /dev/nbd1'
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:10.850 /dev/nbd1'
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:10.850 256+0 records in
00:06:10.850 256+0 records out
00:06:10.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00507511 s, 207 MB/s
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:10.850 256+0 records in
00:06:10.850 256+0 records out
00:06:10.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204906 s, 51.2 MB/s
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:10.850 256+0 records in
00:06:10.850 256+0 records out
00:06:10.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220234 s, 47.6 MB/s
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:10.850 00:11:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:11.109 00:11:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:11.109 00:11:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:11.109 00:11:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:11.109 00:11:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:11.109 00:11:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:11.109 00:11:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:11.109 00:11:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:11.109 00:11:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:11.109 00:11:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:11.109 00:11:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:11.676 00:11:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:11.676 00:11:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:11.676 00:11:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:11.676 00:11:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:11.676 00:11:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:11.676 00:11:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:11.676 00:11:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:11.676 00:11:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:11.676 00:11:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:11.676 00:11:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:11.676 00:11:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:11.934 00:11:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:11.934 00:11:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:11.934 00:11:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:11.934 00:11:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:11.934 00:11:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:11.934 00:11:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:11.934 00:11:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:11.934 00:11:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:11.934 00:11:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:11.934 00:11:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:11.934 00:11:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:11.934 00:11:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:11.934 00:11:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:12.193 00:11:35 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:12.451 [2024-11-18 00:11:36.019875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:12.451 [2024-11-18 00:11:36.063426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.451 [2024-11-18 00:11:36.063429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:12.451 [2024-11-18 00:11:36.121724] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:12.451 [2024-11-18 00:11:36.121789] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:15.736 00:11:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:15.736 00:11:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:06:15.736 spdk_app_start Round 1
00:06:15.736 00:11:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105284 /var/tmp/spdk-nbd.sock
00:06:15.736 00:11:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105284 ']'
00:06:15.736 00:11:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:15.736 00:11:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:15.736 00:11:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:15.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:15.736 00:11:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:15.736 00:11:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:15.736 00:11:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:15.736 00:11:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:15.736 00:11:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:15.995 Malloc0
00:06:15.995 00:11:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:15.995 Malloc1
00:06:15.995 00:11:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:15.995 00:11:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:15.995 00:11:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:15.995 00:11:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:15.995 00:11:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:15.995 00:11:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:15.995 00:11:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:15.995 00:11:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:15.995 00:11:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:15.995 00:11:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:15.995 00:11:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:15.995 00:11:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:15.995 00:11:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:15.995 00:11:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:15.995 00:11:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:15.995 00:11:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:16.253 /dev/nbd0
00:06:16.253 00:11:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:16.253 00:11:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:16.253 00:11:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:16.253 00:11:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:16.253 00:11:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:16.253 00:11:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:16.253 00:11:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:16.253 00:11:40 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:16.253 00:11:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:16.253 00:11:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:16.253 00:11:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:16.253 1+0 records in
00:06:16.253 1+0 records out
00:06:16.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201036 s, 20.4 MB/s
00:06:16.253 00:11:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:16.253 00:11:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:16.253 00:11:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:16.253 00:11:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:16.253 00:11:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:16.253 00:11:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:16.253 00:11:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:16.253 00:11:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:16.510 /dev/nbd1
00:06:16.769 00:11:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:16.769 00:11:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:16.769 00:11:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:16.769 00:11:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:16.769 00:11:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:16.769 00:11:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:16.769 00:11:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:16.769 00:11:40 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:16.769 00:11:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:16.769 00:11:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:16.769 00:11:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:16.769 1+0 records in
00:06:16.769 1+0 records out
00:06:16.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225697 s, 18.1 MB/s
00:06:16.769 00:11:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:16.769 00:11:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:16.769 00:11:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:16.769 00:11:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:16.769 00:11:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:16.769 00:11:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:16.769 00:11:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:16.769 00:11:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:16.769 00:11:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:16.769 00:11:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:17.032 {
00:06:17.032 "nbd_device": "/dev/nbd0",
00:06:17.032 "bdev_name": "Malloc0"
00:06:17.032 },
00:06:17.032 {
00:06:17.032 "nbd_device": "/dev/nbd1",
00:06:17.032 "bdev_name": "Malloc1"
00:06:17.032 }
00:06:17.032 ]'
00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:17.032 {
00:06:17.032 "nbd_device": "/dev/nbd0",
00:06:17.032 "bdev_name": "Malloc0"
00:06:17.032 },
00:06:17.032 {
00:06:17.032 "nbd_device": "/dev/nbd1",
00:06:17.032 "bdev_name": "Malloc1"
00:06:17.032 }
00:06:17.032 ]'
00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:17.032 /dev/nbd1'
00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:17.032 /dev/nbd1'
00:11:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.032 256+0 records in 00:06:17.032 256+0 records out 00:06:17.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526659 s, 199 MB/s 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.032 256+0 records in 00:06:17.032 256+0 records out 00:06:17.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206706 s, 50.7 MB/s 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.032 256+0 records in 00:06:17.032 256+0 records out 00:06:17.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219858 s, 47.7 MB/s 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.032 00:11:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.291 00:11:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.291 00:11:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.291 00:11:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.291 00:11:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.291 00:11:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.291 00:11:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.291 00:11:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.291 00:11:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.291 00:11:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.291 00:11:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.549 00:11:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.549 00:11:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.549 00:11:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.549 00:11:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.549 00:11:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.549 00:11:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.549 00:11:41 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:17.549 00:11:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.549 00:11:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.549 00:11:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.550 00:11:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.808 00:11:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.808 00:11:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.808 00:11:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.066 00:11:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.066 00:11:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.066 00:11:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.066 00:11:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:18.066 00:11:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.066 00:11:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.066 00:11:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.066 00:11:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.066 00:11:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.066 00:11:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.324 00:11:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:18.324 [2024-11-18 00:11:42.120413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.583 [2024-11-18 00:11:42.165581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.583 [2024-11-18 00:11:42.165581] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.583 [2024-11-18 00:11:42.221342] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.583 [2024-11-18 00:11:42.221423] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:21.865 00:11:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:21.865 00:11:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:21.865 spdk_app_start Round 2 00:06:21.865 00:11:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105284 /var/tmp/spdk-nbd.sock 00:06:21.865 00:11:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105284 ']' 00:06:21.865 00:11:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.865 00:11:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.865 00:11:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:21.865 00:11:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.865 00:11:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.865 00:11:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.865 00:11:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:21.865 00:11:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.865 Malloc0 00:06:21.865 00:11:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.124 Malloc1 00:06:22.124 00:11:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.124 00:11:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.124 00:11:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.124 00:11:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.124 00:11:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.124 00:11:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.124 00:11:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.124 00:11:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.124 00:11:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.124 00:11:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:22.124 00:11:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.124 00:11:45 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:22.124 00:11:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:22.124 00:11:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.124 00:11:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.124 00:11:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:22.382 /dev/nbd0 00:06:22.382 00:11:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:22.382 00:11:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:22.382 00:11:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:22.382 00:11:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:22.382 00:11:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.382 00:11:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.382 00:11:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:22.382 00:11:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:22.382 00:11:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.382 00:11:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.382 00:11:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.382 1+0 records in 00:06:22.382 1+0 records out 00:06:22.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186812 s, 21.9 MB/s 00:06:22.382 00:11:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.382 00:11:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:22.382 00:11:46 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.382 00:11:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.382 00:11:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:22.382 00:11:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.382 00:11:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.382 00:11:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.640 /dev/nbd1 00:06:22.640 00:11:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.640 00:11:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.640 00:11:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:22.640 00:11:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:22.640 00:11:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.640 00:11:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.640 00:11:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:22.640 00:11:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:22.640 00:11:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.640 00:11:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.640 00:11:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.640 1+0 records in 00:06:22.640 1+0 records out 00:06:22.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186575 s, 22.0 MB/s 00:06:22.640 00:11:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.640 00:11:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:22.641 00:11:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.641 00:11:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.641 00:11:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:22.641 00:11:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.641 00:11:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.641 00:11:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.641 00:11:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.641 00:11:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.899 00:11:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.899 { 00:06:22.899 "nbd_device": "/dev/nbd0", 00:06:22.899 "bdev_name": "Malloc0" 00:06:22.899 }, 00:06:22.899 { 00:06:22.899 "nbd_device": "/dev/nbd1", 00:06:22.899 "bdev_name": "Malloc1" 00:06:22.899 } 00:06:22.899 ]' 00:06:22.900 00:11:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.900 { 00:06:22.900 "nbd_device": "/dev/nbd0", 00:06:22.900 "bdev_name": "Malloc0" 00:06:22.900 }, 00:06:22.900 { 00:06:22.900 "nbd_device": "/dev/nbd1", 00:06:22.900 "bdev_name": "Malloc1" 00:06:22.900 } 00:06:22.900 ]' 00:06:22.900 00:11:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.157 /dev/nbd1' 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.157 /dev/nbd1' 00:06:23.157 
00:11:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.157 256+0 records in 00:06:23.157 256+0 records out 00:06:23.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436282 s, 240 MB/s 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.157 256+0 records in 00:06:23.157 256+0 records out 00:06:23.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207057 s, 50.6 MB/s 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.157 256+0 records in 00:06:23.157 256+0 records out 00:06:23.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227834 s, 46.0 MB/s 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.157 00:11:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.415 00:11:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.415 00:11:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.415 00:11:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.415 00:11:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.415 00:11:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.415 00:11:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.415 00:11:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.415 00:11:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.415 00:11:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.415 00:11:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.673 00:11:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.673 00:11:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.673 00:11:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.673 00:11:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.673 00:11:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.673 00:11:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.673 00:11:47 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:23.673 00:11:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.673 00:11:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.673 00:11:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.673 00:11:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.931 00:11:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.931 00:11:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.931 00:11:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.931 00:11:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.931 00:11:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.931 00:11:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.931 00:11:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:23.931 00:11:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.931 00:11:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.931 00:11:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.931 00:11:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.931 00:11:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.931 00:11:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.496 00:11:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:24.496 [2024-11-18 00:11:48.204527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.496 [2024-11-18 00:11:48.247682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.496 [2024-11-18 00:11:48.247686] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.496 [2024-11-18 00:11:48.305813] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:24.496 [2024-11-18 00:11:48.305878] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:27.784 00:11:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 105284 /var/tmp/spdk-nbd.sock 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105284 ']' 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:27.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
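Each round of the test also drives `nbd_dd_data_verify` from `bdev/nbd_common.sh`, which produces the `256+0 records in / 256+0 records out` and `cmp -b -n 1M` lines in the trace: generate 1 MiB of random data once, `dd` it onto every nbd device, then byte-compare each device against the source file. A hedged sketch of that cycle, using plain file paths as targets so it runs without real `/dev/nbd*` devices (the real helper writes with `oflag=direct`, dropped here for portability):

```shell
#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify write/verify cycle from the trace.
# Targets are generic paths here; the real script passes /dev/nbd*.
nbd_dd_data_verify_sketch() {
    local tmp_file=$1; shift
    local targets=("$@") t
    # Write phase: generate the 1 MiB pattern once, push it to each target.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
    for t in "${targets[@]}"; do
        dd if="$tmp_file" of="$t" bs=4096 count=256 2>/dev/null
    done
    # Verify phase: byte-compare the first 1 MiB of each target
    # against the source pattern, as the log's cmp invocations do.
    for t in "${targets[@]}"; do
        cmp -b -n 1M "$tmp_file" "$t" || return 1
    done
    rm -f "$tmp_file"
}
```

Because the pattern comes from `/dev/urandom`, a successful `cmp` on every target demonstrates the full write path through the nbd device and back.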
00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:27.784 00:11:51 event.app_repeat -- event/event.sh@39 -- # killprocess 105284 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 105284 ']' 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 105284 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105284 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105284' 00:06:27.784 killing process with pid 105284 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@973 -- # kill 105284 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@978 -- # wait 105284 00:06:27.784 spdk_app_start is called in Round 0. 00:06:27.784 Shutdown signal received, stop current app iteration 00:06:27.784 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 reinitialization... 00:06:27.784 spdk_app_start is called in Round 1. 00:06:27.784 Shutdown signal received, stop current app iteration 00:06:27.784 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 reinitialization... 00:06:27.784 spdk_app_start is called in Round 2. 
00:06:27.784 Shutdown signal received, stop current app iteration 00:06:27.784 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 reinitialization... 00:06:27.784 spdk_app_start is called in Round 3. 00:06:27.784 Shutdown signal received, stop current app iteration 00:06:27.784 00:11:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:27.784 00:11:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:27.784 00:06:27.784 real 0m18.754s 00:06:27.784 user 0m41.479s 00:06:27.784 sys 0m3.381s 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.784 00:11:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.784 ************************************ 00:06:27.784 END TEST app_repeat 00:06:27.784 ************************************ 00:06:27.784 00:11:51 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:27.784 00:11:51 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:27.784 00:11:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.784 00:11:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.784 00:11:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.784 ************************************ 00:06:27.784 START TEST cpu_locks 00:06:27.784 ************************************ 00:06:27.784 00:11:51 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:28.051 * Looking for test storage... 
00:06:28.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:28.051 00:11:51 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:28.051 00:11:51 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:28.051 00:11:51 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:28.051 00:11:51 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.051 00:11:51 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:28.051 00:11:51 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.051 00:11:51 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:28.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.051 --rc genhtml_branch_coverage=1 00:06:28.051 --rc genhtml_function_coverage=1 00:06:28.051 --rc genhtml_legend=1 00:06:28.051 --rc geninfo_all_blocks=1 00:06:28.051 --rc geninfo_unexecuted_blocks=1 00:06:28.051 00:06:28.051 ' 00:06:28.051 00:11:51 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:28.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.051 --rc genhtml_branch_coverage=1 00:06:28.051 --rc genhtml_function_coverage=1 00:06:28.051 --rc genhtml_legend=1 00:06:28.051 --rc geninfo_all_blocks=1 00:06:28.051 --rc geninfo_unexecuted_blocks=1 
00:06:28.051 00:06:28.052 ' 00:06:28.052 00:11:51 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:28.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.052 --rc genhtml_branch_coverage=1 00:06:28.052 --rc genhtml_function_coverage=1 00:06:28.052 --rc genhtml_legend=1 00:06:28.052 --rc geninfo_all_blocks=1 00:06:28.052 --rc geninfo_unexecuted_blocks=1 00:06:28.052 00:06:28.052 ' 00:06:28.052 00:11:51 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:28.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.052 --rc genhtml_branch_coverage=1 00:06:28.052 --rc genhtml_function_coverage=1 00:06:28.052 --rc genhtml_legend=1 00:06:28.052 --rc geninfo_all_blocks=1 00:06:28.052 --rc geninfo_unexecuted_blocks=1 00:06:28.052 00:06:28.052 ' 00:06:28.052 00:11:51 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:28.052 00:11:51 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:28.052 00:11:51 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:28.052 00:11:51 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:28.052 00:11:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.052 00:11:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.052 00:11:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.052 ************************************ 00:06:28.052 START TEST default_locks 00:06:28.052 ************************************ 00:06:28.052 00:11:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:28.052 00:11:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=107772 00:06:28.052 00:11:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:06:28.052 00:11:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 107772 00:06:28.052 00:11:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 107772 ']' 00:06:28.052 00:11:51 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.052 00:11:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.052 00:11:51 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.052 00:11:51 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.052 00:11:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.052 [2024-11-18 00:11:51.789928] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:28.052 [2024-11-18 00:11:51.790007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107772 ] 00:06:28.052 [2024-11-18 00:11:51.859771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.311 [2024-11-18 00:11:51.904871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.569 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.569 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:28.569 00:11:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 107772 00:06:28.569 00:11:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 107772 00:06:28.569 00:11:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.569 lslocks: write error 00:06:28.569 00:11:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 107772 00:06:28.569 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 107772 ']' 00:06:28.569 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 107772 00:06:28.569 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:28.569 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.569 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107772 00:06:28.827 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.827 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.827 00:11:52 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 107772' 00:06:28.827 killing process with pid 107772 00:06:28.827 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 107772 00:06:28.827 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 107772 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 107772 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 107772 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 107772 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 107772 ']' 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (107772) - No such process 00:06:29.086 ERROR: process (pid: 107772) is no longer running 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.086 00:06:29.086 real 0m1.066s 00:06:29.086 user 0m1.048s 00:06:29.086 sys 0m0.471s 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.086 00:11:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.086 ************************************ 00:06:29.086 END TEST default_locks 00:06:29.086 ************************************ 00:06:29.086 00:11:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:29.086 00:11:52 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.086 00:11:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.086 00:11:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.086 ************************************ 00:06:29.086 START TEST default_locks_via_rpc 00:06:29.086 ************************************ 00:06:29.086 00:11:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:29.086 00:11:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=107940 00:06:29.086 00:11:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.086 00:11:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 107940 00:06:29.087 00:11:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 107940 ']' 00:06:29.087 00:11:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.087 00:11:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.087 00:11:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.087 00:11:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.087 00:11:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.087 [2024-11-18 00:11:52.906266] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:29.087 [2024-11-18 00:11:52.906370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107940 ] 00:06:29.346 [2024-11-18 00:11:52.974265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.346 [2024-11-18 00:11:53.023897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.604 00:11:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.604 00:11:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:29.604 00:11:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:29.604 00:11:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.604 00:11:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.604 00:11:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.604 00:11:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:29.604 00:11:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.604 00:11:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.604 00:11:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.604 00:11:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.604 00:11:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.604 00:11:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.604 00:11:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.604 00:11:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 107940 00:06:29.604 00:11:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 107940 00:06:29.604 00:11:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.862 00:11:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 107940 00:06:29.862 00:11:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 107940 ']' 00:06:29.862 00:11:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 107940 00:06:29.862 00:11:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:29.862 00:11:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.862 00:11:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107940 00:06:29.862 00:11:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.862 00:11:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.862 00:11:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107940' 00:06:29.862 killing process with pid 107940 00:06:29.862 00:11:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 107940 00:06:29.862 00:11:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 107940 00:06:30.428 00:06:30.428 real 0m1.101s 00:06:30.428 user 0m1.070s 00:06:30.428 sys 0m0.503s 00:06:30.428 00:11:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.428 00:11:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.428 ************************************ 00:06:30.428 END TEST default_locks_via_rpc 00:06:30.428 ************************************ 00:06:30.428 00:11:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:30.428 00:11:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.428 00:11:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.428 00:11:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.428 ************************************ 00:06:30.428 START TEST non_locking_app_on_locked_coremask 00:06:30.428 ************************************ 00:06:30.428 00:11:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:30.428 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=108100 00:06:30.428 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.428 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 108100 /var/tmp/spdk.sock 00:06:30.428 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108100 ']' 00:06:30.428 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.428 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.428 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:30.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.428 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.428 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.428 [2024-11-18 00:11:54.056528] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:30.428 [2024-11-18 00:11:54.056610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108100 ] 00:06:30.428 [2024-11-18 00:11:54.123273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.428 [2024-11-18 00:11:54.172738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.688 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.688 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:30.688 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=108103 00:06:30.688 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:30.688 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 108103 /var/tmp/spdk2.sock 00:06:30.688 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108103 ']' 00:06:30.688 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:30.688 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.688 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.688 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.688 00:11:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.688 [2024-11-18 00:11:54.480825] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:30.688 [2024-11-18 00:11:54.480912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108103 ] 00:06:30.947 [2024-11-18 00:11:54.579988] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.947 [2024-11-18 00:11:54.580017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.947 [2024-11-18 00:11:54.681948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.515 00:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.515 00:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:31.515 00:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 108100 00:06:31.515 00:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 108100 00:06:31.515 00:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.773 lslocks: write error 00:06:31.773 00:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 108100 00:06:31.773 00:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108100 ']' 00:06:31.773 00:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 108100 00:06:31.773 00:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:31.773 00:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.773 00:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108100 00:06:32.032 00:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.032 00:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.032 00:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 108100' 00:06:32.032 killing process with pid 108100 00:06:32.032 00:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 108100 00:06:32.032 00:11:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 108100 00:06:32.599 00:11:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 108103 00:06:32.599 00:11:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108103 ']' 00:06:32.599 00:11:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 108103 00:06:32.599 00:11:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:32.599 00:11:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.599 00:11:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108103 00:06:32.599 00:11:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.599 00:11:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.599 00:11:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108103' 00:06:32.599 killing process with pid 108103 00:06:32.599 00:11:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 108103 00:06:32.599 00:11:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 108103 00:06:33.166 00:06:33.166 real 0m2.757s 00:06:33.166 user 0m2.773s 00:06:33.166 sys 0m0.970s 00:06:33.166 00:11:56 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.166 00:11:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.166 ************************************ 00:06:33.166 END TEST non_locking_app_on_locked_coremask 00:06:33.166 ************************************ 00:06:33.166 00:11:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:33.166 00:11:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.166 00:11:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.166 00:11:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.166 ************************************ 00:06:33.166 START TEST locking_app_on_unlocked_coremask 00:06:33.166 ************************************ 00:06:33.166 00:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:33.166 00:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=108418 00:06:33.166 00:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:33.166 00:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 108418 /var/tmp/spdk.sock 00:06:33.166 00:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108418 ']' 00:06:33.166 00:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.166 00:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.166 00:11:56 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.166 00:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.166 00:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.166 [2024-11-18 00:11:56.866740] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:33.166 [2024-11-18 00:11:56.866816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108418 ] 00:06:33.166 [2024-11-18 00:11:56.935991] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:33.166 [2024-11-18 00:11:56.936024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.166 [2024-11-18 00:11:56.980836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.431 00:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.431 00:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:33.431 00:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=108535 00:06:33.431 00:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.431 00:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 108535 /var/tmp/spdk2.sock 00:06:33.432 00:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108535 ']' 00:06:33.432 00:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.432 00:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.432 00:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.432 00:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.432 00:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.690 [2024-11-18 00:11:57.286228] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:33.690 [2024-11-18 00:11:57.286334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108535 ] 00:06:33.690 [2024-11-18 00:11:57.385149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.690 [2024-11-18 00:11:57.473201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.256 00:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.256 00:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:34.256 00:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 108535 00:06:34.256 00:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 108535 00:06:34.256 00:11:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.823 lslocks: write error 00:06:34.823 00:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 108418 00:06:34.823 00:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108418 ']' 00:06:34.823 00:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 108418 00:06:34.823 00:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:34.823 00:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.823 00:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108418 00:06:34.823 00:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.823 00:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.823 00:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108418' 00:06:34.823 killing process with pid 108418 00:06:34.823 00:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 108418 00:06:34.823 00:11:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 108418 00:06:35.391 00:11:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 108535 00:06:35.391 00:11:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108535 ']' 00:06:35.391 00:11:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 108535 00:06:35.391 00:11:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:35.391 00:11:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.391 00:11:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108535 00:06:35.649 00:11:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.649 00:11:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.649 00:11:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108535' 00:06:35.649 killing process with pid 108535 00:06:35.649 00:11:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 108535 00:06:35.649 00:11:59 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 108535 00:06:35.908 00:06:35.908 real 0m2.782s 00:06:35.908 user 0m2.805s 00:06:35.908 sys 0m0.988s 00:06:35.908 00:11:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.908 00:11:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.908 ************************************ 00:06:35.908 END TEST locking_app_on_unlocked_coremask 00:06:35.908 ************************************ 00:06:35.908 00:11:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:35.908 00:11:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.908 00:11:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.908 00:11:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.908 ************************************ 00:06:35.908 START TEST locking_app_on_locked_coremask 00:06:35.908 ************************************ 00:06:35.908 00:11:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:35.908 00:11:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=108833 00:06:35.908 00:11:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.908 00:11:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 108833 /var/tmp/spdk.sock 00:06:35.908 00:11:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108833 ']' 00:06:35.908 00:11:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:35.908 00:11:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.908 00:11:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.908 00:11:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.908 00:11:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.908 [2024-11-18 00:11:59.701103] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:35.908 [2024-11-18 00:11:59.701195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108833 ] 00:06:36.166 [2024-11-18 00:11:59.765262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.166 [2024-11-18 00:11:59.806812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.435 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.435 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:36.435 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=108858 00:06:36.435 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.435 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 108858 /var/tmp/spdk2.sock 
00:06:36.435 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:36.435 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 108858 /var/tmp/spdk2.sock 00:06:36.435 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:36.435 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.435 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:36.435 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.435 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 108858 /var/tmp/spdk2.sock 00:06:36.435 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108858 ']' 00:06:36.435 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.435 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.435 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:36.436 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.436 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.436 [2024-11-18 00:12:00.104168] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:36.436 [2024-11-18 00:12:00.104251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108858 ] 00:06:36.436 [2024-11-18 00:12:00.212891] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 108833 has claimed it. 00:06:36.436 [2024-11-18 00:12:00.212962] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:37.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (108858) - No such process 00:06:37.376 ERROR: process (pid: 108858) is no longer running 00:06:37.376 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.376 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:37.376 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:37.376 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:37.376 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:37.376 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:37.376 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 108833 00:06:37.376 00:12:00 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 108833 00:06:37.376 00:12:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.376 lslocks: write error 00:06:37.376 00:12:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 108833 00:06:37.376 00:12:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108833 ']' 00:06:37.376 00:12:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 108833 00:06:37.376 00:12:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:37.376 00:12:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.376 00:12:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108833 00:06:37.376 00:12:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.376 00:12:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.376 00:12:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108833' 00:06:37.376 killing process with pid 108833 00:06:37.376 00:12:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 108833 00:06:37.376 00:12:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 108833 00:06:37.943 00:06:37.943 real 0m1.874s 00:06:37.943 user 0m2.115s 00:06:37.943 sys 0m0.583s 00:06:37.943 00:12:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.943 00:12:01 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:37.943 ************************************ 00:06:37.943 END TEST locking_app_on_locked_coremask 00:06:37.943 ************************************ 00:06:37.943 00:12:01 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:37.943 00:12:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.943 00:12:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.943 00:12:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.943 ************************************ 00:06:37.943 START TEST locking_overlapped_coremask 00:06:37.943 ************************************ 00:06:37.943 00:12:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:37.943 00:12:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=109226 00:06:37.943 00:12:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:37.943 00:12:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 109226 /var/tmp/spdk.sock 00:06:37.943 00:12:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 109226 ']' 00:06:37.943 00:12:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.943 00:12:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.943 00:12:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:37.943 00:12:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.943 00:12:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.943 [2024-11-18 00:12:01.628770] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:37.943 [2024-11-18 00:12:01.628847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109226 ] 00:06:37.943 [2024-11-18 00:12:01.694853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.943 [2024-11-18 00:12:01.745670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.943 [2024-11-18 00:12:01.745728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.943 [2024-11-18 00:12:01.745731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.201 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.201 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.201 00:12:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=109259 00:06:38.201 00:12:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 109259 /var/tmp/spdk2.sock 00:06:38.201 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:38.201 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 109259 /var/tmp/spdk2.sock 00:06:38.201 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:38.201 00:12:02 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:38.201 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.201 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:38.201 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.201 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 109259 /var/tmp/spdk2.sock 00:06:38.201 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 109259 ']' 00:06:38.201 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.201 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.201 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.201 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.201 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.459 [2024-11-18 00:12:02.070916] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:38.459 [2024-11-18 00:12:02.071000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109259 ] 00:06:38.459 [2024-11-18 00:12:02.177085] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 109226 has claimed it. 00:06:38.459 [2024-11-18 00:12:02.177146] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (109259) - No such process 00:06:39.027 ERROR: process (pid: 109259) is no longer running 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 109226 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 109226 ']' 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 109226 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109226 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109226' 00:06:39.027 killing process with pid 109226 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 109226 00:06:39.027 00:12:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 109226 00:06:39.595 00:06:39.595 real 0m1.625s 00:06:39.595 user 0m4.578s 00:06:39.595 sys 0m0.471s 00:06:39.595 00:12:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.595 00:12:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.595 ************************************ 
00:06:39.595 END TEST locking_overlapped_coremask 00:06:39.595 ************************************ 00:06:39.595 00:12:03 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:39.595 00:12:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.595 00:12:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.595 00:12:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.595 ************************************ 00:06:39.595 START TEST locking_overlapped_coremask_via_rpc 00:06:39.595 ************************************ 00:06:39.595 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:39.595 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=109421 00:06:39.595 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:39.595 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 109421 /var/tmp/spdk.sock 00:06:39.595 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 109421 ']' 00:06:39.595 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.595 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.595 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:39.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.595 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.595 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.595 [2024-11-18 00:12:03.306079] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:39.595 [2024-11-18 00:12:03.306144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109421 ] 00:06:39.595 [2024-11-18 00:12:03.372586] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:39.595 [2024-11-18 00:12:03.372642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.854 [2024-11-18 00:12:03.426210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.854 [2024-11-18 00:12:03.426275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.854 [2024-11-18 00:12:03.426278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.113 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.113 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:40.113 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=109548 00:06:40.113 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:40.113 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 109548 /var/tmp/spdk2.sock 00:06:40.113 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 109548 ']' 00:06:40.113 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.113 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.113 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.113 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.113 00:12:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.113 [2024-11-18 00:12:03.739396] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:40.113 [2024-11-18 00:12:03.739485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109548 ] 00:06:40.113 [2024-11-18 00:12:03.842365] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:40.113 [2024-11-18 00:12:03.842407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.372 [2024-11-18 00:12:03.939867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.372 [2024-11-18 00:12:03.943443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:40.372 [2024-11-18 00:12:03.943446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.939 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.939 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:40.939 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:40.939 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.939 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.939 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.939 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.939 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:40.939 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.939 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:40.939 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.939 00:12:04 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:40.939 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.939 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.939 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.939 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.198 [2024-11-18 00:12:04.767421] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 109421 has claimed it. 00:06:41.198 request: 00:06:41.198 { 00:06:41.198 "method": "framework_enable_cpumask_locks", 00:06:41.198 "req_id": 1 00:06:41.198 } 00:06:41.198 Got JSON-RPC error response 00:06:41.198 response: 00:06:41.198 { 00:06:41.198 "code": -32603, 00:06:41.198 "message": "Failed to claim CPU core: 2" 00:06:41.198 } 00:06:41.198 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:41.198 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:41.198 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:41.198 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:41.198 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:41.198 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 109421 /var/tmp/spdk.sock 00:06:41.198 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 109421 ']' 00:06:41.198 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.198 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.198 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.198 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.198 00:12:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.456 00:12:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.456 00:12:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:41.456 00:12:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 109548 /var/tmp/spdk2.sock 00:06:41.456 00:12:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 109548 ']' 00:06:41.456 00:12:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.456 00:12:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.456 00:12:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:41.456 00:12:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.456 00:12:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.715 00:12:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.715 00:12:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:41.715 00:12:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:41.715 00:12:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.715 00:12:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.715 00:12:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.715 00:06:41.715 real 0m2.076s 00:06:41.715 user 0m1.171s 00:06:41.715 sys 0m0.188s 00:06:41.715 00:12:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.715 00:12:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.715 ************************************ 00:06:41.715 END TEST locking_overlapped_coremask_via_rpc 00:06:41.715 ************************************ 00:06:41.715 00:12:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:41.715 00:12:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 109421 ]] 00:06:41.715 00:12:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 109421 00:06:41.715 00:12:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109421 ']' 00:06:41.715 00:12:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109421 00:06:41.715 00:12:05 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:41.715 00:12:05 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.715 00:12:05 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109421 00:06:41.715 00:12:05 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.715 00:12:05 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.715 00:12:05 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109421' 00:06:41.715 killing process with pid 109421 00:06:41.715 00:12:05 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 109421 00:06:41.715 00:12:05 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 109421 00:06:41.974 00:12:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 109548 ]] 00:06:41.974 00:12:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 109548 00:06:41.974 00:12:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109548 ']' 00:06:41.974 00:12:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109548 00:06:41.974 00:12:05 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:41.974 00:12:05 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.974 00:12:05 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109548 00:06:42.233 00:12:05 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:42.233 00:12:05 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:42.233 00:12:05 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109548' 00:06:42.233 
killing process with pid 109548 00:06:42.233 00:12:05 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 109548 00:06:42.233 00:12:05 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 109548 00:06:42.492 00:12:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:42.492 00:12:06 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:42.492 00:12:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 109421 ]] 00:06:42.492 00:12:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 109421 00:06:42.492 00:12:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109421 ']' 00:06:42.492 00:12:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109421 00:06:42.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (109421) - No such process 00:06:42.492 00:12:06 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 109421 is not found' 00:06:42.492 Process with pid 109421 is not found 00:06:42.492 00:12:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 109548 ]] 00:06:42.492 00:12:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 109548 00:06:42.492 00:12:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109548 ']' 00:06:42.492 00:12:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109548 00:06:42.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (109548) - No such process 00:06:42.492 00:12:06 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 109548 is not found' 00:06:42.492 Process with pid 109548 is not found 00:06:42.492 00:12:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:42.492 00:06:42.492 real 0m14.666s 00:06:42.493 user 0m27.297s 00:06:42.493 sys 0m5.134s 00:06:42.493 00:12:06 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.493 00:12:06 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:42.493 ************************************ 00:06:42.493 END TEST cpu_locks 00:06:42.493 ************************************ 00:06:42.493 00:06:42.493 real 0m39.189s 00:06:42.493 user 1m17.790s 00:06:42.493 sys 0m9.315s 00:06:42.493 00:12:06 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.493 00:12:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.493 ************************************ 00:06:42.493 END TEST event 00:06:42.493 ************************************ 00:06:42.493 00:12:06 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:42.493 00:12:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.493 00:12:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.493 00:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:42.493 ************************************ 00:06:42.493 START TEST thread 00:06:42.493 ************************************ 00:06:42.493 00:12:06 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:42.752 * Looking for test storage... 
00:06:42.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:42.752 00:12:06 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:42.752 00:12:06 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:42.752 00:12:06 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.752 00:12:06 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.752 00:12:06 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.752 00:12:06 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.752 00:12:06 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.752 00:12:06 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.752 00:12:06 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.752 00:12:06 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.752 00:12:06 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.752 00:12:06 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.752 00:12:06 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.752 00:12:06 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.752 00:12:06 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.752 00:12:06 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:42.752 00:12:06 thread -- scripts/common.sh@345 -- # : 1 00:06:42.752 00:12:06 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.752 00:12:06 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.752 00:12:06 thread -- scripts/common.sh@365 -- # decimal 1 00:06:42.752 00:12:06 thread -- scripts/common.sh@353 -- # local d=1 00:06:42.752 00:12:06 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.752 00:12:06 thread -- scripts/common.sh@355 -- # echo 1 00:06:42.752 00:12:06 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.752 00:12:06 thread -- scripts/common.sh@366 -- # decimal 2 00:06:42.752 00:12:06 thread -- scripts/common.sh@353 -- # local d=2 00:06:42.752 00:12:06 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.752 00:12:06 thread -- scripts/common.sh@355 -- # echo 2 00:06:42.752 00:12:06 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.752 00:12:06 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.752 00:12:06 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.752 00:12:06 thread -- scripts/common.sh@368 -- # return 0 00:06:42.752 00:12:06 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.752 00:12:06 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.752 --rc genhtml_branch_coverage=1 00:06:42.753 --rc genhtml_function_coverage=1 00:06:42.753 --rc genhtml_legend=1 00:06:42.753 --rc geninfo_all_blocks=1 00:06:42.753 --rc geninfo_unexecuted_blocks=1 00:06:42.753 00:06:42.753 ' 00:06:42.753 00:12:06 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.753 --rc genhtml_branch_coverage=1 00:06:42.753 --rc genhtml_function_coverage=1 00:06:42.753 --rc genhtml_legend=1 00:06:42.753 --rc geninfo_all_blocks=1 00:06:42.753 --rc geninfo_unexecuted_blocks=1 00:06:42.753 00:06:42.753 ' 00:06:42.753 00:12:06 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.753 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.753 --rc genhtml_branch_coverage=1 00:06:42.753 --rc genhtml_function_coverage=1 00:06:42.753 --rc genhtml_legend=1 00:06:42.753 --rc geninfo_all_blocks=1 00:06:42.753 --rc geninfo_unexecuted_blocks=1 00:06:42.753 00:06:42.753 ' 00:06:42.753 00:12:06 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.753 --rc genhtml_branch_coverage=1 00:06:42.753 --rc genhtml_function_coverage=1 00:06:42.753 --rc genhtml_legend=1 00:06:42.753 --rc geninfo_all_blocks=1 00:06:42.753 --rc geninfo_unexecuted_blocks=1 00:06:42.753 00:06:42.753 ' 00:06:42.753 00:12:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.753 00:12:06 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:42.753 00:12:06 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.753 00:12:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.753 ************************************ 00:06:42.753 START TEST thread_poller_perf 00:06:42.753 ************************************ 00:06:42.753 00:12:06 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.753 [2024-11-18 00:12:06.479083] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:42.753 [2024-11-18 00:12:06.479139] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109928 ] 00:06:42.753 [2024-11-18 00:12:06.544830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.011 [2024-11-18 00:12:06.590763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.011 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:43.961 [2024-11-17T23:12:07.783Z] ====================================== 00:06:43.961 [2024-11-17T23:12:07.783Z] busy:2710016553 (cyc) 00:06:43.961 [2024-11-17T23:12:07.783Z] total_run_count: 360000 00:06:43.961 [2024-11-17T23:12:07.783Z] tsc_hz: 2700000000 (cyc) 00:06:43.961 [2024-11-17T23:12:07.783Z] ====================================== 00:06:43.961 [2024-11-17T23:12:07.783Z] poller_cost: 7527 (cyc), 2787 (nsec) 00:06:43.961 00:06:43.961 real 0m1.173s 00:06:43.961 user 0m1.102s 00:06:43.961 sys 0m0.065s 00:06:43.961 00:12:07 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.961 00:12:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.961 ************************************ 00:06:43.961 END TEST thread_poller_perf 00:06:43.961 ************************************ 00:06:43.961 00:12:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.961 00:12:07 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:43.961 00:12:07 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.961 00:12:07 thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.961 ************************************ 00:06:43.961 START TEST thread_poller_perf 00:06:43.961 
************************************ 00:06:43.961 00:12:07 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.961 [2024-11-18 00:12:07.708133] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:43.961 [2024-11-18 00:12:07.708201] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110303 ] 00:06:43.961 [2024-11-18 00:12:07.775363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.221 [2024-11-18 00:12:07.822079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.221 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:45.155 [2024-11-17T23:12:08.977Z] ====================================== 00:06:45.155 [2024-11-17T23:12:08.977Z] busy:2702654754 (cyc) 00:06:45.155 [2024-11-17T23:12:08.977Z] total_run_count: 4722000 00:06:45.155 [2024-11-17T23:12:08.977Z] tsc_hz: 2700000000 (cyc) 00:06:45.155 [2024-11-17T23:12:08.977Z] ====================================== 00:06:45.155 [2024-11-17T23:12:08.977Z] poller_cost: 572 (cyc), 211 (nsec) 00:06:45.155 00:06:45.155 real 0m1.172s 00:06:45.155 user 0m1.097s 00:06:45.155 sys 0m0.069s 00:06:45.155 00:12:08 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.155 00:12:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.155 ************************************ 00:06:45.155 END TEST thread_poller_perf 00:06:45.155 ************************************ 00:06:45.155 00:12:08 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:45.155 00:06:45.155 real 0m2.588s 00:06:45.155 user 0m2.333s 00:06:45.155 sys 0m0.258s 00:06:45.155 00:12:08 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.155 00:12:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.155 ************************************ 00:06:45.155 END TEST thread 00:06:45.155 ************************************ 00:06:45.155 00:12:08 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:45.155 00:12:08 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:45.155 00:12:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.155 00:12:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.155 00:12:08 -- common/autotest_common.sh@10 -- # set +x 00:06:45.155 ************************************ 00:06:45.155 START TEST app_cmdline 00:06:45.155 ************************************ 00:06:45.155 00:12:08 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:45.412 * Looking for test storage... 00:06:45.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:45.412 00:12:08 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:45.412 00:12:08 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:45.412 00:12:08 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.412 00:12:09 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.412 00:12:09 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:45.413 00:12:09 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.413 00:12:09 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:45.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.413 --rc genhtml_branch_coverage=1 
00:06:45.413 --rc genhtml_function_coverage=1 00:06:45.413 --rc genhtml_legend=1 00:06:45.413 --rc geninfo_all_blocks=1 00:06:45.413 --rc geninfo_unexecuted_blocks=1 00:06:45.413 00:06:45.413 ' 00:06:45.413 00:12:09 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:45.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.413 --rc genhtml_branch_coverage=1 00:06:45.413 --rc genhtml_function_coverage=1 00:06:45.413 --rc genhtml_legend=1 00:06:45.413 --rc geninfo_all_blocks=1 00:06:45.413 --rc geninfo_unexecuted_blocks=1 00:06:45.413 00:06:45.413 ' 00:06:45.413 00:12:09 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:45.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.413 --rc genhtml_branch_coverage=1 00:06:45.413 --rc genhtml_function_coverage=1 00:06:45.413 --rc genhtml_legend=1 00:06:45.413 --rc geninfo_all_blocks=1 00:06:45.413 --rc geninfo_unexecuted_blocks=1 00:06:45.413 00:06:45.413 ' 00:06:45.413 00:12:09 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:45.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.413 --rc genhtml_branch_coverage=1 00:06:45.413 --rc genhtml_function_coverage=1 00:06:45.413 --rc genhtml_legend=1 00:06:45.413 --rc geninfo_all_blocks=1 00:06:45.413 --rc geninfo_unexecuted_blocks=1 00:06:45.413 00:06:45.413 ' 00:06:45.413 00:12:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:45.413 00:12:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=110779 00:06:45.413 00:12:09 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:45.413 00:12:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 110779 00:06:45.413 00:12:09 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 110779 ']' 00:06:45.413 00:12:09 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:45.413 00:12:09 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.413 00:12:09 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.413 00:12:09 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.413 00:12:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.413 [2024-11-18 00:12:09.131188] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:45.413 [2024-11-18 00:12:09.131278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110779 ] 00:06:45.413 [2024-11-18 00:12:09.203407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.670 [2024-11-18 00:12:09.250925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.670 00:12:09 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.670 00:12:09 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:45.670 00:12:09 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:45.929 { 00:06:45.929 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:06:45.929 "fields": { 00:06:45.929 "major": 25, 00:06:45.929 "minor": 1, 00:06:45.929 "patch": 0, 00:06:45.929 "suffix": "-pre", 00:06:45.929 "commit": "83e8405e4" 00:06:45.929 } 00:06:45.929 } 00:06:45.929 00:12:09 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:45.929 00:12:09 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:45.929 00:12:09 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:06:45.929 00:12:09 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:46.195 00:12:09 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:46.195 00:12:09 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.195 00:12:09 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:46.195 00:12:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:46.195 00:12:09 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:46.195 00:12:09 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.195 00:12:09 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:46.195 00:12:09 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:46.195 00:12:09 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.195 00:12:09 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:46.195 00:12:09 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.195 00:12:09 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.195 00:12:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.195 00:12:09 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.195 00:12:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.195 00:12:09 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.195 00:12:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:06:46.195 00:12:09 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.195 00:12:09 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:46.195 00:12:09 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.459 request: 00:06:46.459 { 00:06:46.459 "method": "env_dpdk_get_mem_stats", 00:06:46.459 "req_id": 1 00:06:46.459 } 00:06:46.459 Got JSON-RPC error response 00:06:46.459 response: 00:06:46.459 { 00:06:46.459 "code": -32601, 00:06:46.459 "message": "Method not found" 00:06:46.459 } 00:06:46.459 00:12:10 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:46.459 00:12:10 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.459 00:12:10 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.459 00:12:10 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.459 00:12:10 app_cmdline -- app/cmdline.sh@1 -- # killprocess 110779 00:06:46.459 00:12:10 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 110779 ']' 00:06:46.459 00:12:10 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 110779 00:06:46.459 00:12:10 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:46.459 00:12:10 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.459 00:12:10 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110779 00:06:46.459 00:12:10 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.459 00:12:10 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.459 00:12:10 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110779' 00:06:46.459 killing process with pid 110779 00:06:46.459 00:12:10 
app_cmdline -- common/autotest_common.sh@973 -- # kill 110779 00:06:46.459 00:12:10 app_cmdline -- common/autotest_common.sh@978 -- # wait 110779 00:06:46.718 00:06:46.718 real 0m1.539s 00:06:46.718 user 0m1.925s 00:06:46.718 sys 0m0.488s 00:06:46.718 00:12:10 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.718 00:12:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:46.718 ************************************ 00:06:46.718 END TEST app_cmdline 00:06:46.718 ************************************ 00:06:46.718 00:12:10 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:46.718 00:12:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.718 00:12:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.718 00:12:10 -- common/autotest_common.sh@10 -- # set +x 00:06:46.718 ************************************ 00:06:46.718 START TEST version 00:06:46.718 ************************************ 00:06:46.718 00:12:10 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:46.977 * Looking for test storage... 
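The app_cmdline test above drives the `NOT` helper from autotest_common.sh: it invokes `rpc.py env_dpdk_get_mem_stats`, expects the JSON-RPC error `-32601 "Method not found"`, and treats the command's failure as the test passing. A minimal sketch of that negative-test pattern (the helper body here is illustrative, not SPDK's exact implementation):

```shell
# Assert that a command fails (non-zero exit), in the spirit of the
# NOT helper used by the log above. Illustrative sketch only.
NOT() {
    local es=0
    "$@" || es=$?              # run the command, capture its exit status
    if (( es == 0 )); then
        echo "expected failure, but '$*' succeeded" >&2
        return 1               # the negative test itself fails
    fi
    return 0                   # command failed, as expected
}

NOT false && echo "negative test passed"
```

This is why the log shows `es=1` after the RPC call: the command's failure is recorded, then inverted into a test success.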
00:06:46.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:46.977 00:12:10 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.977 00:12:10 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.977 00:12:10 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.977 00:12:10 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.977 00:12:10 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.977 00:12:10 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.977 00:12:10 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.977 00:12:10 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.977 00:12:10 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.977 00:12:10 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.977 00:12:10 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.977 00:12:10 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.977 00:12:10 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.977 00:12:10 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.977 00:12:10 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.977 00:12:10 version -- scripts/common.sh@344 -- # case "$op" in 00:06:46.977 00:12:10 version -- scripts/common.sh@345 -- # : 1 00:06:46.977 00:12:10 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.977 00:12:10 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.977 00:12:10 version -- scripts/common.sh@365 -- # decimal 1 00:06:46.977 00:12:10 version -- scripts/common.sh@353 -- # local d=1 00:06:46.977 00:12:10 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.977 00:12:10 version -- scripts/common.sh@355 -- # echo 1 00:06:46.977 00:12:10 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.977 00:12:10 version -- scripts/common.sh@366 -- # decimal 2 00:06:46.977 00:12:10 version -- scripts/common.sh@353 -- # local d=2 00:06:46.977 00:12:10 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.977 00:12:10 version -- scripts/common.sh@355 -- # echo 2 00:06:46.977 00:12:10 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.977 00:12:10 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.977 00:12:10 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.977 00:12:10 version -- scripts/common.sh@368 -- # return 0 00:06:46.977 00:12:10 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.977 00:12:10 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.977 --rc genhtml_branch_coverage=1 00:06:46.977 --rc genhtml_function_coverage=1 00:06:46.977 --rc genhtml_legend=1 00:06:46.977 --rc geninfo_all_blocks=1 00:06:46.977 --rc geninfo_unexecuted_blocks=1 00:06:46.977 00:06:46.977 ' 00:06:46.977 00:12:10 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.977 --rc genhtml_branch_coverage=1 00:06:46.977 --rc genhtml_function_coverage=1 00:06:46.977 --rc genhtml_legend=1 00:06:46.977 --rc geninfo_all_blocks=1 00:06:46.977 --rc geninfo_unexecuted_blocks=1 00:06:46.977 00:06:46.977 ' 00:06:46.977 00:12:10 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.977 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.977 --rc genhtml_branch_coverage=1 00:06:46.977 --rc genhtml_function_coverage=1 00:06:46.977 --rc genhtml_legend=1 00:06:46.977 --rc geninfo_all_blocks=1 00:06:46.977 --rc geninfo_unexecuted_blocks=1 00:06:46.977 00:06:46.978 ' 00:06:46.978 00:12:10 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.978 --rc genhtml_branch_coverage=1 00:06:46.978 --rc genhtml_function_coverage=1 00:06:46.978 --rc genhtml_legend=1 00:06:46.978 --rc geninfo_all_blocks=1 00:06:46.978 --rc geninfo_unexecuted_blocks=1 00:06:46.978 00:06:46.978 ' 00:06:46.978 00:12:10 version -- app/version.sh@17 -- # get_header_version major 00:06:46.978 00:12:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:46.978 00:12:10 version -- app/version.sh@14 -- # cut -f2 00:06:46.978 00:12:10 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.978 00:12:10 version -- app/version.sh@17 -- # major=25 00:06:46.978 00:12:10 version -- app/version.sh@18 -- # get_header_version minor 00:06:46.978 00:12:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:46.978 00:12:10 version -- app/version.sh@14 -- # cut -f2 00:06:46.978 00:12:10 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.978 00:12:10 version -- app/version.sh@18 -- # minor=1 00:06:46.978 00:12:10 version -- app/version.sh@19 -- # get_header_version patch 00:06:46.978 00:12:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:46.978 00:12:10 version -- app/version.sh@14 -- # cut -f2 00:06:46.978 00:12:10 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.978 
00:12:10 version -- app/version.sh@19 -- # patch=0 00:06:46.978 00:12:10 version -- app/version.sh@20 -- # get_header_version suffix 00:06:46.978 00:12:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:46.978 00:12:10 version -- app/version.sh@14 -- # cut -f2 00:06:46.978 00:12:10 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.978 00:12:10 version -- app/version.sh@20 -- # suffix=-pre 00:06:46.978 00:12:10 version -- app/version.sh@22 -- # version=25.1 00:06:46.978 00:12:10 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:46.978 00:12:10 version -- app/version.sh@28 -- # version=25.1rc0 00:06:46.978 00:12:10 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:46.978 00:12:10 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:46.978 00:12:10 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:46.978 00:12:10 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:46.978 00:06:46.978 real 0m0.195s 00:06:46.978 user 0m0.133s 00:06:46.978 sys 0m0.087s 00:06:46.978 00:12:10 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.978 00:12:10 version -- common/autotest_common.sh@10 -- # set +x 00:06:46.978 ************************************ 00:06:46.978 END TEST version 00:06:46.978 ************************************ 00:06:46.978 00:12:10 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:46.978 00:12:10 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:46.978 00:12:10 -- spdk/autotest.sh@194 -- # uname -s 00:06:46.978 00:12:10 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:06:46.978 00:12:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:46.978 00:12:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:46.978 00:12:10 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:46.978 00:12:10 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:46.978 00:12:10 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:46.978 00:12:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:46.978 00:12:10 -- common/autotest_common.sh@10 -- # set +x 00:06:46.978 00:12:10 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:46.978 00:12:10 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:46.978 00:12:10 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:46.978 00:12:10 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:46.978 00:12:10 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:46.978 00:12:10 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:46.978 00:12:10 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:46.978 00:12:10 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:46.978 00:12:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.978 00:12:10 -- common/autotest_common.sh@10 -- # set +x 00:06:46.978 ************************************ 00:06:46.978 START TEST nvmf_tcp 00:06:46.978 ************************************ 00:06:46.978 00:12:10 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:47.237 * Looking for test storage... 
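The version test above extracts `SPDK_VERSION_MAJOR`, `MINOR`, `PATCH`, and `SUFFIX` from include/spdk/version.h with a grep/cut/tr pipeline. A self-contained sketch of that `get_header_version` pattern, using a fabricated sample header (SPDK's real version.h has the same tab-separated `#define` shape):

```shell
# Illustrative stand-in for include/spdk/version.h; tabs separate the
# macro name from its value, which is what "cut -f2" relies on.
printf '#define SPDK_VERSION_MAJOR\t25\n#define SPDK_VERSION_MINOR\t1\n#define SPDK_VERSION_SUFFIX\t"-pre"\n' > /tmp/version.h

get_header_version() {
    # $1 = MAJOR | MINOR | SUFFIX; tr strips the quotes around SUFFIX
    grep -E "^#define SPDK_VERSION_$1[[:space:]]+" /tmp/version.h \
        | cut -f2 | tr -d '"'
}

echo "$(get_header_version MAJOR).$(get_header_version MINOR)$(get_header_version SUFFIX)"
# prints 25.1-pre
```

The log's `version=25.1` / `version=25.1rc0` values are assembled exactly this way, with `-pre` swapped for `rc0` when `patch == 0`.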
00:06:47.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:47.237 00:12:10 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.237 00:12:10 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.237 00:12:10 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.237 00:12:10 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.237 00:12:10 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:47.237 00:12:10 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.237 00:12:10 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.237 --rc genhtml_branch_coverage=1 00:06:47.237 --rc genhtml_function_coverage=1 00:06:47.237 --rc genhtml_legend=1 00:06:47.237 --rc geninfo_all_blocks=1 00:06:47.237 --rc geninfo_unexecuted_blocks=1 00:06:47.237 00:06:47.237 ' 00:06:47.237 00:12:10 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.237 --rc genhtml_branch_coverage=1 00:06:47.237 --rc genhtml_function_coverage=1 00:06:47.237 --rc genhtml_legend=1 00:06:47.237 --rc geninfo_all_blocks=1 00:06:47.237 --rc geninfo_unexecuted_blocks=1 00:06:47.237 00:06:47.237 ' 00:06:47.237 00:12:10 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:47.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.237 --rc genhtml_branch_coverage=1 00:06:47.237 --rc genhtml_function_coverage=1 00:06:47.237 --rc genhtml_legend=1 00:06:47.237 --rc geninfo_all_blocks=1 00:06:47.237 --rc geninfo_unexecuted_blocks=1 00:06:47.237 00:06:47.237 ' 00:06:47.237 00:12:10 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.237 --rc genhtml_branch_coverage=1 00:06:47.237 --rc genhtml_function_coverage=1 00:06:47.237 --rc genhtml_legend=1 00:06:47.237 --rc geninfo_all_blocks=1 00:06:47.237 --rc geninfo_unexecuted_blocks=1 00:06:47.237 00:06:47.237 ' 00:06:47.237 00:12:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:47.237 00:12:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:47.237 00:12:10 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:47.237 00:12:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:47.237 00:12:10 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.237 00:12:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.237 ************************************ 00:06:47.237 START TEST nvmf_target_core 00:06:47.237 ************************************ 00:06:47.237 00:12:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:47.237 * Looking for test storage... 
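The `lt 1.15 2` gate repeated before each test above is scripts/common.sh's `cmp_versions`: both versions are split on `.`, `-`, and `:` (the `IFS=.-:` / `read -ra` lines in the trace), then compared field by field as integers. A simplified "less than" sketch of that algorithm, assuming purely numeric fields:

```shell
# Field-by-field numeric version compare, in the spirit of cmp_versions
# above. Missing fields default to 0; non-numeric fields (e.g. "rc0")
# are not handled in this simplified sketch.
version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Splitting numerically is what makes `1.2 < 1.10` come out true, where a plain string compare would get it backwards.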
00:06:47.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:47.237 00:12:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.237 00:12:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.237 00:12:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.498 --rc genhtml_branch_coverage=1 00:06:47.498 --rc genhtml_function_coverage=1 00:06:47.498 --rc genhtml_legend=1 00:06:47.498 --rc geninfo_all_blocks=1 00:06:47.498 --rc geninfo_unexecuted_blocks=1 00:06:47.498 00:06:47.498 ' 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.498 --rc genhtml_branch_coverage=1 
00:06:47.498 --rc genhtml_function_coverage=1 00:06:47.498 --rc genhtml_legend=1 00:06:47.498 --rc geninfo_all_blocks=1 00:06:47.498 --rc geninfo_unexecuted_blocks=1 00:06:47.498 00:06:47.498 ' 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.498 --rc genhtml_branch_coverage=1 00:06:47.498 --rc genhtml_function_coverage=1 00:06:47.498 --rc genhtml_legend=1 00:06:47.498 --rc geninfo_all_blocks=1 00:06:47.498 --rc geninfo_unexecuted_blocks=1 00:06:47.498 00:06:47.498 ' 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.498 --rc genhtml_branch_coverage=1 00:06:47.498 --rc genhtml_function_coverage=1 00:06:47.498 --rc genhtml_legend=1 00:06:47.498 --rc geninfo_all_blocks=1 00:06:47.498 --rc geninfo_unexecuted_blocks=1 00:06:47.498 00:06:47.498 ' 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.498 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:47.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:47.499 ************************************ 00:06:47.499 START TEST nvmf_abort 00:06:47.499 ************************************ 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:47.499 * Looking for test storage... 
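The `/test/nvmf/common.sh: line 33: [: : integer expression expected` message in the trace above comes from running `[` with an arithmetic operator on an empty string: `'[' '' -eq 1 ']'`. A short sketch of the failure and two common guards (variable names here are illustrative):

```shell
# Reproduce the error class seen in the log, then guard against it.
x=""

# This is the failing shape: "" is not an integer, so [ errors out.
[ "$x" -eq 1 ] 2>/dev/null || echo "empty string is not an integer"

# Guard 1: supply a numeric default before the test.
[ "${x:-0}" -eq 1 ] || echo "defaulted to 0; test is well-formed"

# Guard 2: only run the numeric test when the variable is non-empty.
{ [ -n "$x" ] && [ "$x" -eq 1 ]; } || echo "skipped the numeric test"
```

In the log the script continues anyway because `[` merely returns non-zero after printing the diagnostic, so the build is unaffected; it is cosmetic noise rather than a failure.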
00:06:47.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.499 
00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.499 --rc genhtml_branch_coverage=1 00:06:47.499 --rc genhtml_function_coverage=1 00:06:47.499 --rc genhtml_legend=1 00:06:47.499 --rc geninfo_all_blocks=1 00:06:47.499 --rc 
geninfo_unexecuted_blocks=1 00:06:47.499 00:06:47.499 ' 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.499 --rc genhtml_branch_coverage=1 00:06:47.499 --rc genhtml_function_coverage=1 00:06:47.499 --rc genhtml_legend=1 00:06:47.499 --rc geninfo_all_blocks=1 00:06:47.499 --rc geninfo_unexecuted_blocks=1 00:06:47.499 00:06:47.499 ' 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.499 --rc genhtml_branch_coverage=1 00:06:47.499 --rc genhtml_function_coverage=1 00:06:47.499 --rc genhtml_legend=1 00:06:47.499 --rc geninfo_all_blocks=1 00:06:47.499 --rc geninfo_unexecuted_blocks=1 00:06:47.499 00:06:47.499 ' 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.499 --rc genhtml_branch_coverage=1 00:06:47.499 --rc genhtml_function_coverage=1 00:06:47.499 --rc genhtml_legend=1 00:06:47.499 --rc geninfo_all_blocks=1 00:06:47.499 --rc geninfo_unexecuted_blocks=1 00:06:47.499 00:06:47.499 ' 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.499 00:12:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.499 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:47.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:47.500 00:12:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:50.038 00:12:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:50.038 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:50.038 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:50.038 00:12:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:50.038 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:06:50.038 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:50.038 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:50.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:50.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:06:50.039 00:06:50.039 --- 10.0.0.2 ping statistics --- 00:06:50.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.039 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:50.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:50.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:06:50.039 00:06:50.039 --- 10.0.0.1 ping statistics --- 00:06:50.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.039 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=112997 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 112997 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 112997 ']' 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.039 00:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.039 [2024-11-18 00:12:13.756177] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:50.039 [2024-11-18 00:12:13.756253] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.039 [2024-11-18 00:12:13.843161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.297 [2024-11-18 00:12:13.903987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.297 [2024-11-18 00:12:13.904051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:50.297 [2024-11-18 00:12:13.904092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:50.297 [2024-11-18 00:12:13.904114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:50.297 [2024-11-18 00:12:13.904132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:50.297 [2024-11-18 00:12:13.906007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.297 [2024-11-18 00:12:13.906074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.297 [2024-11-18 00:12:13.906084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.297 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.297 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:50.297 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:50.297 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:50.297 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.556 [2024-11-18 00:12:14.148430] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.556 Malloc0 00:06:50.556 00:12:14 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.556 Delay0 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.556 [2024-11-18 00:12:14.219412] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.556 00:12:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:50.556 [2024-11-18 00:12:14.324146] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:53.089 Initializing NVMe Controllers 00:06:53.089 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:53.089 controller IO queue size 128 less than required 00:06:53.089 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:53.089 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:53.089 Initialization complete. Launching workers. 
00:06:53.089 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28673 00:06:53.089 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28734, failed to submit 62 00:06:53.089 success 28677, unsuccessful 57, failed 0 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:53.089 rmmod nvme_tcp 00:06:53.089 rmmod nvme_fabrics 00:06:53.089 rmmod nvme_keyring 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:53.089 00:12:16 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 112997 ']' 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 112997 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 112997 ']' 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 112997 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112997 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112997' 00:06:53.089 killing process with pid 112997 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 112997 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 112997 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:53.089 00:12:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.002 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:55.002 00:06:55.002 real 0m7.628s 00:06:55.002 user 0m11.243s 00:06:55.002 sys 0m2.478s 00:06:55.002 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.002 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:55.002 ************************************ 00:06:55.002 END TEST nvmf_abort 00:06:55.002 ************************************ 00:06:55.002 00:12:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:55.002 00:12:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:55.002 00:12:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.002 00:12:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:55.262 ************************************ 00:06:55.262 START TEST nvmf_ns_hotplug_stress 00:06:55.262 ************************************ 00:06:55.262 00:12:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:55.262 * Looking for test storage... 00:06:55.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.262 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:55.262 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:55.262 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.262 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.262 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.262 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.262 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.262 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.262 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.262 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.262 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.262 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.262 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.262 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.262 
00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.262 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:55.262 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:55.262 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.263 00:12:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.263 --rc genhtml_branch_coverage=1 00:06:55.263 --rc genhtml_function_coverage=1 00:06:55.263 --rc genhtml_legend=1 00:06:55.263 --rc geninfo_all_blocks=1 00:06:55.263 --rc geninfo_unexecuted_blocks=1 00:06:55.263 00:06:55.263 ' 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.263 --rc genhtml_branch_coverage=1 00:06:55.263 --rc genhtml_function_coverage=1 00:06:55.263 --rc genhtml_legend=1 00:06:55.263 --rc geninfo_all_blocks=1 00:06:55.263 --rc geninfo_unexecuted_blocks=1 00:06:55.263 00:06:55.263 ' 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:55.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.263 --rc genhtml_branch_coverage=1 00:06:55.263 --rc genhtml_function_coverage=1 00:06:55.263 --rc genhtml_legend=1 00:06:55.263 --rc geninfo_all_blocks=1 00:06:55.263 --rc geninfo_unexecuted_blocks=1 00:06:55.263 00:06:55.263 ' 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.263 --rc genhtml_branch_coverage=1 00:06:55.263 --rc genhtml_function_coverage=1 00:06:55.263 --rc genhtml_legend=1 00:06:55.263 --rc geninfo_all_blocks=1 00:06:55.263 --rc geninfo_unexecuted_blocks=1 00:06:55.263 
00:06:55.263 ' 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:55.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:55.263 00:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:55.263 00:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:55.263 00:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:55.263 00:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:55.263 00:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.263 00:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:55.263 00:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:55.263 00:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:55.263 00:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.263 00:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.263 00:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.263 00:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:55.263 00:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:55.263 00:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:55.263 00:12:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:57.799 00:12:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:57.799 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:57.799 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:57.800 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:57.800 00:12:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:57.800 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:57.800 00:12:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:57.800 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:57.800 00:12:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:57.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:57.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:06:57.800 00:06:57.800 --- 10.0.0.2 ping statistics --- 00:06:57.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.800 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:57.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:57.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:06:57.800 00:06:57.800 --- 10.0.0.1 ping statistics --- 00:06:57.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.800 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=115245 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 115245 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 115245 ']' 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.800 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:57.800 [2024-11-18 00:12:21.431182] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:57.800 [2024-11-18 00:12:21.431263] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.800 [2024-11-18 00:12:21.501586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.800 [2024-11-18 00:12:21.551037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:57.800 [2024-11-18 00:12:21.551081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:57.800 [2024-11-18 00:12:21.551111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:57.800 [2024-11-18 00:12:21.551124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:57.800 [2024-11-18 00:12:21.551134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:57.800 [2024-11-18 00:12:21.552815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.800 [2024-11-18 00:12:21.552869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.800 [2024-11-18 00:12:21.552872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.080 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.080 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:58.080 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:58.080 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:58.080 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:58.080 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:58.080 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:58.080 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:58.338 [2024-11-18 00:12:21.941414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.338 00:12:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:58.596 00:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:58.853 [2024-11-18 00:12:22.472196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:58.853 00:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:59.111 00:12:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:59.370 Malloc0 00:06:59.370 00:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:59.629 Delay0 00:06:59.629 00:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.887 00:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:00.145 NULL1 00:07:00.145 00:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:00.404 00:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=115660 00:07:00.404 00:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:00.404 00:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:00.404 00:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.780 Read completed with error (sct=0, sc=11) 00:07:01.780 00:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.038 00:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:02.038 00:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:02.296 true 00:07:02.296 00:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:02.296 00:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.861 00:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.119 00:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:03.119 00:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:03.377 true 00:07:03.635 00:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:03.635 00:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.892 00:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.151 00:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:04.151 00:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:04.410 true 00:07:04.410 00:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:04.410 00:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.669 00:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.927 00:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:04.927 00:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:05.186 true 00:07:05.186 00:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:05.186 00:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.139 00:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.397 00:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:06.397 00:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:06.656 true 00:07:06.656 00:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:06.656 00:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.914 00:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.172 00:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:07.172 00:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:07.429 true 00:07:07.429 00:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:07.429 00:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.687 00:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.945 00:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:07.945 00:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:08.203 true 00:07:08.203 00:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:08.203 00:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.578 00:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.578 00:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 
00:07:09.578 00:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:09.836 true 00:07:09.836 00:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:09.836 00:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.094 00:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.352 00:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:10.352 00:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:10.610 true 00:07:10.610 00:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:10.610 00:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.867 00:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.124 00:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:11.124 00:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:11.382 true 00:07:11.382 00:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:11.382 00:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.318 00:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.318 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.576 00:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:12.576 00:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:12.834 true 00:07:12.834 00:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:12.834 00:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.092 00:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.656 00:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:13.656 00:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:13.656 true 00:07:13.656 00:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:13.656 00:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.914 00:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.172 00:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:14.172 00:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:14.439 true 00:07:14.439 00:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:14.439 00:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.817 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.817 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:15.817 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:16.074 true 00:07:16.074 00:12:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:16.074 00:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.332 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.590 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:16.590 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:16.849 true 00:07:16.849 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:16.849 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.107 00:12:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.366 00:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:17.366 00:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:17.624 true 00:07:17.624 00:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:17.624 00:12:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.997 00:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.997 00:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:18.997 00:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:19.255 true 00:07:19.255 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:19.255 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.512 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.769 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:19.769 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:20.026 true 00:07:20.026 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:20.026 00:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.284 00:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.850 00:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:20.850 00:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:20.850 true 00:07:20.850 00:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:20.850 00:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.785 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.785 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.785 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:22.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:22.043 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:22.043 00:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:22.301 true 00:07:22.301 00:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
115660 00:07:22.301 00:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.559 00:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.817 00:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:22.817 00:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:23.075 true 00:07:23.075 00:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:23.075 00:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.007 00:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.264 00:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:24.264 00:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:24.522 true 00:07:24.522 00:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:24.522 00:12:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.779 00:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.036 00:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:25.036 00:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:25.293 true 00:07:25.293 00:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:25.293 00:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.224 00:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.481 00:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:26.481 00:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:26.738 true 00:07:26.738 00:12:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:26.738 00:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.996 00:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.253 00:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:27.253 00:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:27.511 true 00:07:27.511 00:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:27.511 00:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.442 00:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.442 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.442 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.442 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:28.699 00:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:28.699 00:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:28.974 true 00:07:28.974 00:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:28.974 00:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.230 00:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.488 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:29.488 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:29.745 true 00:07:29.745 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:29.745 00:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.678 00:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.678 Initializing NVMe Controllers 00:07:30.678 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:30.678 Controller IO queue size 128, less than required. 00:07:30.678 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:30.678 Controller IO queue size 128, less than required. 00:07:30.678 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:30.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:30.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:30.678 Initialization complete. Launching workers. 00:07:30.678 ======================================================== 00:07:30.678 Latency(us) 00:07:30.678 Device Information : IOPS MiB/s Average min max 00:07:30.678 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 685.08 0.33 84112.65 3455.77 1037238.71 00:07:30.678 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9344.96 4.56 13698.32 3385.19 536053.07 00:07:30.678 ======================================================== 00:07:30.678 Total : 10030.04 4.90 18507.79 3385.19 1037238.71 00:07:30.678 00:07:30.935 00:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:30.935 00:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:31.193 true 00:07:31.193 00:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115660 00:07:31.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (115660) - No such process 00:07:31.193 00:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 115660 00:07:31.193 00:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.450 00:12:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:31.707 00:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:31.707 00:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:31.707 00:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:31.707 00:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.707 00:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:31.965 null0 00:07:31.965 00:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:31.965 00:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.965 00:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:32.223 null1 00:07:32.223 00:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:32.223 00:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:32.223 00:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:32.480 null2 00:07:32.480 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:07:32.480 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:32.480 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:32.738 null3 00:07:32.738 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:32.738 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:32.738 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:33.008 null4 00:07:33.008 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:33.008 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:33.008 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:33.268 null5 00:07:33.268 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:33.268 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:33.268 00:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:33.524 null6 00:07:33.524 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:33.524 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:33.524 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:33.782 null7 00:07:33.782 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:33.782 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:33.782 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:33.782 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:33.782 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:33.782 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:33.782 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:33.782 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:33.782 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:33.782 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:33.782 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.782 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:33.782 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 
-- # pids+=($!) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 119740 119741 119743 119745 119747 119749 119751 119753 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.783 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:34.040 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:34.040 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:34.040 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.040 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.040 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:34.040 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:34.297 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:34.297 00:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.556 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:34.814 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:34.814 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:34.814 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:34.814 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.814 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:07:34.814 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:34.814 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.814 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.071 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.072 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:35.072 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:35.072 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.072 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.072 00:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:35.329 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:35.329 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.329 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:35.329 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:35.329 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:35.329 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:35.329 00:12:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:35.329 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:35.586 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.586 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.587 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:35.845 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:35.845 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:35.845 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:35.845 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.845 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:35.845 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:35.845 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:35.845 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.410 00:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:36.668 00:13:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.668 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:36.668 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:36.668 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.668 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.668 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:36.668 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:36.668 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.926 00:13:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.926 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:37.183 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:37.183 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.183 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:37.183 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:37.183 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:37.183 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.183 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.183 00:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:37.440 00:13:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.440 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:37.698 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.698 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:37.698 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:37.698 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:37.698 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.698 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:37.698 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.698 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:37.955 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.955 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.955 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:37.955 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.955 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.955 
00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:37.955 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.955 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.955 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:37.955 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.955 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.955 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:37.955 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.955 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.955 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:37.955 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.955 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.956 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:37.956 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.956 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.956 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:37.956 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.956 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.956 00:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:38.520 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.520 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:38.520 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:38.520 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:38.520 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:38.520 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:38.520 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:38.520 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.777 00:13:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.777 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:39.035 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:39.035 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.035 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:39.035 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:39.035 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:39.035 00:13:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:39.035 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:39.035 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.293 00:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:39.551 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:39.551 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:39.551 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:39.551 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:39.551 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:39.551 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:39.551 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:39.551 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:39.808 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:39.808 rmmod nvme_tcp
00:07:39.808 rmmod nvme_fabrics
00:07:40.066 rmmod nvme_keyring
00:07:40.066 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:40.066 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:07:40.066 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:07:40.066 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 115245 ']'
00:07:40.066 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 115245
00:07:40.066 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 115245 ']'
00:07:40.066 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 115245
00:07:40.066 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:07:40.066 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:40.066 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 115245
00:07:40.066 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:40.066 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:07:40.066 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 115245'
00:07:40.066 killing process with pid 115245
00:07:40.066 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 115245
00:07:40.066 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 115245
00:07:40.327 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:40.327 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:40.327 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:40.327 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:07:40.327 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:07:40.327 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:40.327 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:07:40.327 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:40.327 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:40.327 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:40.327 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:40.327 00:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:42.242 00:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:42.242 
00:07:42.242 real 0m47.132s
00:07:42.242 user 3m38.938s
00:07:42.242 sys 0m15.940s
00:07:42.242 00:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:42.242 00:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:42.242 ************************************
00:07:42.242 END TEST nvmf_ns_hotplug_stress
00:07:42.242 ************************************
00:07:42.242 00:13:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:42.242 00:13:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:42.242 00:13:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:42.242 00:13:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:42.242 ************************************
00:07:42.242 START TEST nvmf_delete_subsystem
00:07:42.242 ************************************
00:07:42.242 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:42.242 * Looking for test storage...
00:07:42.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:42.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:42.503 --rc genhtml_branch_coverage=1
00:07:42.503 --rc genhtml_function_coverage=1
00:07:42.503 --rc genhtml_legend=1
00:07:42.503 --rc geninfo_all_blocks=1
00:07:42.503 --rc geninfo_unexecuted_blocks=1
00:07:42.503 
00:07:42.503 '
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:42.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:42.503 --rc genhtml_branch_coverage=1
00:07:42.503 --rc genhtml_function_coverage=1
00:07:42.503 --rc genhtml_legend=1
00:07:42.503 --rc geninfo_all_blocks=1
00:07:42.503 --rc geninfo_unexecuted_blocks=1
00:07:42.503 
00:07:42.503 '
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:07:42.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:42.503 --rc genhtml_branch_coverage=1
00:07:42.503 --rc genhtml_function_coverage=1
00:07:42.503 --rc genhtml_legend=1
00:07:42.503 --rc geninfo_all_blocks=1
00:07:42.503 --rc geninfo_unexecuted_blocks=1
00:07:42.503 
00:07:42.503 '
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:07:42.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:42.503 --rc genhtml_branch_coverage=1
00:07:42.503 --rc genhtml_function_coverage=1
00:07:42.503 --rc genhtml_legend=1
00:07:42.503 --rc geninfo_all_blocks=1
00:07:42.503 --rc geninfo_unexecuted_blocks=1
00:07:42.503 
00:07:42.503 '
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:42.503 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:42.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:42.504 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:42.504 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:42.504 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:42.504 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:07:42.504 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:07:42.504 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:42.504 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:42.504 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:42.504 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:42.504 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:42.504 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:42.504 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:42.504 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:07:42.504 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:07:42.504 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:07:42.504 00:13:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:07:45.046 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:07:45.046 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:07:45.046 Found net devices under 0000:0a:00.0: cvl_0_0
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:45.046 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:07:45.047 Found net devices under 0000:0a:00.1: cvl_0_1
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:45.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:45.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:07:45.047 00:07:45.047 --- 10.0.0.2 ping statistics --- 00:07:45.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.047 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:45.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:07:45.047 00:07:45.047 --- 10.0.0.1 ping statistics --- 00:07:45.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.047 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:45.047 00:13:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=122644 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 122644 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 122644 ']' 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.047 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.048 [2024-11-18 00:13:08.587690] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
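The interface and namespace plumbing traced above (nvmf/common.sh @250-@291) follows a fixed recipe: move the target-side port (cvl_0_0) into a private network namespace, address both ends from 10.0.0.0/24, open TCP/4420 in the firewall, and verify reachability with ping. A stand-alone sketch of that sequence, with names and addresses copied from the trace; the run wrapper and DRY_RUN switch are additions for safe preview, since the real commands need root and the cvl_0_* interfaces:

```shell
#!/bin/sh
# Sketch of the nvmf_tcp_init steps traced above. DRY_RUN (default: on)
# prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ -n "$DRY_RUN" ]; then
        echo "+ $*"          # preview only
    else
        "$@"                 # execute for real (requires root)
    fi
}

NS=cvl_0_0_ns_spdk   # namespace holding the target-side port
TGT=cvl_0_0          # target interface, gets NVMF_FIRST_TARGET_IP
INI=cvl_0_1          # initiator interface, stays in the root namespace

run ip -4 addr flush "$TGT"
run ip -4 addr flush "$INI"
run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
```

In the harness the iptables call goes through the `ipts` wrapper, which appends an SPDK_NVMF comment so the rule can be cleaned up later.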
00:07:45.048 [2024-11-18 00:13:08.587782] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.048 [2024-11-18 00:13:08.657493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:45.048 [2024-11-18 00:13:08.699324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.048 [2024-11-18 00:13:08.699399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.048 [2024-11-18 00:13:08.699411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.048 [2024-11-18 00:13:08.699437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.048 [2024-11-18 00:13:08.699445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
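nvmfappstart above launches nvmf_tgt inside the namespace and then blocks in waitforlisten (common/autotest_common.sh, rpc_addr=/var/tmp/spdk.sock, max_retries=100) until the RPC endpoint is usable. A hedged approximation of that wait loop; the real helper probes the socket with an actual RPC, which is simplified here to the socket file appearing:

```shell
#!/bin/sh
# Approximation of waitforlisten: poll until the target process is alive
# and its RPC UNIX socket exists, mirroring the defaults in the trace.
waitforlisten() {
    pid=$1
    rpc_addr=${2:-/var/tmp/spdk.sock}
    max_retries=100
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "app ($pid) exited before listening on $rpc_addr" >&2
            return 1
        fi
        # Simplified check: the real helper issues an RPC over the socket.
        [ -S "$rpc_addr" ] && return 0
        i=$((i + 1))
        sleep 0.1
    done
    return 1   # never started listening
}
```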
00:07:45.048 [2024-11-18 00:13:08.700819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.048 [2024-11-18 00:13:08.700825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.048 [2024-11-18 00:13:08.838618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.048 [2024-11-18 00:13:08.854850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.048 NULL1 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.048 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.306 Delay0 00:07:45.306 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.306 00:13:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.307 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.307 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.307 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.307 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=122671 00:07:45.307 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:45.307 00:13:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:45.307 [2024-11-18 00:13:08.939746] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
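The rpc_cmd calls traced in the steps above provision the target end to end: a TCP transport, subsystem cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so that I/O is still queued when the subsystem is later deleted. Collected into one script; rpc_cmd here is a stand-in that previews scripts/rpc.py invocations unless DRY_RUN is cleared, and the arguments are copied from the log:

```shell
#!/bin/sh
# The provisioning sequence from delete_subsystem.sh, as traced above.
# DRY_RUN (default: on) previews the scripts/rpc.py calls; clear it to
# drive a live nvmf_tgt over /var/tmp/spdk.sock.
DRY_RUN=${DRY_RUN:-1}
NQN=nqn.2016-06.io.spdk:cnode1

rpc_cmd() {
    if [ -n "$DRY_RUN" ]; then
        echo "+ scripts/rpc.py $*"
    else
        scripts/rpc.py "$@"
    fi
}

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512    # 1000 MiB backing, 512 B blocks
# 1,000,000 us of added latency on reads/writes keeps 128-deep perf I/O
# in flight long enough for nvmf_delete_subsystem to race against it.
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc_cmd nvmf_subsystem_add_ns "$NQN" Delay0
```

With the target provisioned, spdk_nvme_perf is pointed at `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420` exactly as in the trace above.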
00:07:47.205 00:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:47.205 00:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.205 00:13:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 starting I/O failed: -6 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.462 starting I/O failed: -6 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.462 starting I/O failed: -6 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 starting I/O failed: -6 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 starting I/O failed: -6 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.462 starting I/O failed: -6 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error 
(sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.462 starting I/O failed: -6 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.462 starting I/O failed: -6 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 starting I/O failed: -6 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 starting I/O failed: -6 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 starting I/O failed: -6 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 starting I/O failed: -6 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.462 Read completed with error (sct=0, sc=8) 00:07:47.462 [2024-11-18 00:13:11.102133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb27800d020 is same with the state(6) to be set 00:07:47.462 Write completed with error (sct=0, sc=8) 00:07:47.463 starting I/O failed: -6 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 
00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 starting I/O failed: -6 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 starting I/O failed: -6 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 starting I/O failed: -6 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 starting 
I/O failed: -6 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 starting I/O failed: -6 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 starting I/O failed: -6 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 starting I/O failed: -6 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 
00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 starting I/O failed: -6 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 starting I/O failed: -6 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 starting I/O failed: -6 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 starting I/O failed: -6 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error 
(sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 starting I/O failed: -6 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Read completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 Write completed with error (sct=0, sc=8) 00:07:47.463 starting I/O failed: -6 00:07:47.463 starting I/O failed: -6 00:07:47.463 starting I/O failed: -6 00:07:47.463 starting I/O failed: -6 00:07:47.463 starting I/O failed: -6 00:07:47.463 starting I/O failed: -6 00:07:48.395 [2024-11-18 00:13:12.075858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15325b0 is same with the state(6) to be set 00:07:48.395 Write completed with error (sct=0, sc=8) 00:07:48.395 Write completed with error (sct=0, sc=8) 00:07:48.395 Write completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 
Write completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 [2024-11-18 00:13:12.101107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb27800d350 is same with the state(6) to be set 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Write completed with error (sct=0, sc=8) 00:07:48.395 Write completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Write completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Write completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Read completed with error (sct=0, sc=8) 00:07:48.395 Write completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Write completed 
with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 [2024-11-18 00:13:12.104669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1524e70 is same with the state(6) to be set 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, 
sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 [2024-11-18 00:13:12.105102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15243f0 is same with the state(6) to be set 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Read completed with error (sct=0, sc=8) 00:07:48.396 Write completed with error (sct=0, sc=8) 
00:07:48.396 Read completed with error (sct=0, sc=8)
00:07:48.396 Write completed with error (sct=0, sc=8)
00:07:48.396 Read completed with error (sct=0, sc=8)
00:07:48.396 Read completed with error (sct=0, sc=8)
00:07:48.396 Read completed with error (sct=0, sc=8)
00:07:48.396 Read completed with error (sct=0, sc=8)
00:07:48.396 [2024-11-18 00:13:12.105331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1524810 is same with the state(6) to be set
00:07:48.396 Initializing NVMe Controllers
00:07:48.396 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:48.396 Controller IO queue size 128, less than required.
00:07:48.396 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:48.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:48.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:48.396 Initialization complete. Launching workers.
00:07:48.396 ========================================================
00:07:48.396 Latency(us)
00:07:48.396 Device Information : IOPS MiB/s Average min max
00:07:48.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 186.06 0.09 970592.83 802.86 1011495.02
00:07:48.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.32 0.07 889026.04 648.67 1012296.92
00:07:48.396 ========================================================
00:07:48.396 Total : 338.38 0.17 933875.82 648.67 1012296.92
00:07:48.396
00:07:48.396 [2024-11-18 00:13:12.106269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15325b0 (9): Bad file descriptor
00:07:48.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:48.396 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:48.396 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:48.396 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 122671
00:07:48.396 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 122671
00:07:48.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (122671) - No such process
00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 122671
00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:07:48.960 00:13:12
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 122671 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 122671 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.960 
00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.960 [2024-11-18 00:13:12.629909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=123083 00:07:48.960 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:48.961 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:48.961 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123083 00:07:48.961 00:13:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:48.961 [2024-11-18 00:13:12.701837] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:49.525 00:13:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:49.525 00:13:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123083 00:07:49.525 00:13:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:50.089 00:13:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:50.089 00:13:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123083 00:07:50.089 00:13:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:50.347 00:13:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:50.347 00:13:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123083 00:07:50.347 00:13:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:50.911 00:13:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:50.911 00:13:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123083 00:07:50.911 00:13:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:51.477 00:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:51.477 00:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123083 00:07:51.477 00:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:52.041 00:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:52.041 00:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123083
00:07:52.041 00:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:52.298 Initializing NVMe Controllers
00:07:52.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:52.298 Controller IO queue size 128, less than required.
00:07:52.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:52.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:52.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:52.299 Initialization complete. Launching workers.
00:07:52.299 ========================================================
00:07:52.299 Latency(us)
00:07:52.299 Device Information : IOPS MiB/s Average min max
00:07:52.299 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003637.27 1000168.22 1012102.22
00:07:52.299 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004681.38 1000167.32 1041994.37
00:07:52.299 ========================================================
00:07:52.299 Total : 256.00 0.12 1004159.33 1000167.32 1041994.37
00:07:52.299
00:07:52.556 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:52.556 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 123083
00:07:52.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (123083) - No such process
00:07:52.556 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 123083
00:07:52.556 00:13:16
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:52.556 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:52.556 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:52.556 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:52.556 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:52.556 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:52.556 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:52.556 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:52.556 rmmod nvme_tcp 00:07:52.556 rmmod nvme_fabrics 00:07:52.556 rmmod nvme_keyring 00:07:52.556 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:52.557 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:52.557 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:52.557 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 122644 ']' 00:07:52.557 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 122644 00:07:52.557 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 122644 ']' 00:07:52.557 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 122644 00:07:52.557 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:52.557 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.557 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122644 00:07:52.557 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.557 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.557 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122644' 00:07:52.557 killing process with pid 122644 00:07:52.557 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 122644 00:07:52.557 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 122644 00:07:52.817 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:52.817 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:52.817 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:52.817 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:52.817 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:52.817 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:52.817 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:52.817 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:52.817 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:52.817 00:13:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:52.817 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:52.817 00:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:54.725 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:54.725
00:07:54.725 real 0m12.499s
00:07:54.725 user 0m28.000s
00:07:54.725 sys 0m3.120s
00:07:54.725 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:54.725 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:54.725 ************************************
00:07:54.725 END TEST nvmf_delete_subsystem
00:07:54.725 ************************************
00:07:54.725 00:13:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:54.726 00:13:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:54.726 00:13:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:54.726 00:13:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:54.985 ************************************
00:07:54.985 START TEST nvmf_host_management
00:07:54.985 ************************************
00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:54.985 * Looking for test storage...
00:07:54.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:54.985 00:13:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.985 00:13:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:54.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.985 --rc genhtml_branch_coverage=1 00:07:54.985 --rc genhtml_function_coverage=1 00:07:54.985 --rc genhtml_legend=1 00:07:54.985 --rc geninfo_all_blocks=1 00:07:54.985 --rc geninfo_unexecuted_blocks=1 00:07:54.985 00:07:54.985 ' 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:54.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.985 --rc genhtml_branch_coverage=1 00:07:54.985 --rc genhtml_function_coverage=1 00:07:54.985 --rc genhtml_legend=1 00:07:54.985 --rc geninfo_all_blocks=1 00:07:54.985 --rc geninfo_unexecuted_blocks=1 00:07:54.985 00:07:54.985 ' 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:54.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.985 --rc genhtml_branch_coverage=1 00:07:54.985 --rc genhtml_function_coverage=1 00:07:54.985 --rc genhtml_legend=1 00:07:54.985 --rc geninfo_all_blocks=1 00:07:54.985 --rc geninfo_unexecuted_blocks=1 00:07:54.985 00:07:54.985 ' 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:54.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.985 --rc genhtml_branch_coverage=1 00:07:54.985 --rc genhtml_function_coverage=1 00:07:54.985 --rc genhtml_legend=1 00:07:54.985 --rc geninfo_all_blocks=1 00:07:54.985 --rc geninfo_unexecuted_blocks=1 00:07:54.985 00:07:54.985 ' 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.985 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:54.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:54.986 00:13:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:57.524 00:13:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:57.524 00:13:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:57.524 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:57.524 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:57.524 00:13:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:57.524 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:57.524 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:57.524 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.525 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.525 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:57.525 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:57.525 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:57.525 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:57.525 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:57.525 00:13:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:57.525 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:57.525 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.525 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:57.525 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:57.525 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:57.525 00:13:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:57.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:57.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:07:57.525 00:07:57.525 --- 10.0.0.2 ping statistics --- 00:07:57.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.525 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:57.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:57.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:07:57.525 00:07:57.525 --- 10.0.0.1 ping statistics --- 00:07:57.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.525 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=125560 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 125560 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 125560 ']' 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.525 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.525 [2024-11-18 00:13:21.152744] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:57.525 [2024-11-18 00:13:21.152837] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.525 [2024-11-18 00:13:21.225841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.525 [2024-11-18 00:13:21.275867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.525 [2024-11-18 00:13:21.275921] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.525 [2024-11-18 00:13:21.275949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.525 [2024-11-18 00:13:21.275960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.525 [2024-11-18 00:13:21.275970] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:57.525 [2024-11-18 00:13:21.277622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.525 [2024-11-18 00:13:21.277749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.525 [2024-11-18 00:13:21.277816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:57.525 [2024-11-18 00:13:21.277818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.784 [2024-11-18 00:13:21.428558] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:57.784 00:13:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.784 Malloc0 00:07:57.784 [2024-11-18 00:13:21.498142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=125605 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 125605 /var/tmp/bdevperf.sock 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 125605 ']' 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:57.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:57.784 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:57.784 { 00:07:57.784 "params": { 00:07:57.784 "name": "Nvme$subsystem", 00:07:57.785 "trtype": "$TEST_TRANSPORT", 00:07:57.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:57.785 "adrfam": "ipv4", 00:07:57.785 "trsvcid": "$NVMF_PORT", 00:07:57.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:57.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:57.785 "hdgst": ${hdgst:-false}, 
00:07:57.785 "ddgst": ${ddgst:-false} 00:07:57.785 }, 00:07:57.785 "method": "bdev_nvme_attach_controller" 00:07:57.785 } 00:07:57.785 EOF 00:07:57.785 )") 00:07:57.785 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:57.785 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:57.785 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:57.785 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:57.785 "params": { 00:07:57.785 "name": "Nvme0", 00:07:57.785 "trtype": "tcp", 00:07:57.785 "traddr": "10.0.0.2", 00:07:57.785 "adrfam": "ipv4", 00:07:57.785 "trsvcid": "4420", 00:07:57.785 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:57.785 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:57.785 "hdgst": false, 00:07:57.785 "ddgst": false 00:07:57.785 }, 00:07:57.785 "method": "bdev_nvme_attach_controller" 00:07:57.785 }' 00:07:57.785 [2024-11-18 00:13:21.579111] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:57.785 [2024-11-18 00:13:21.579186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125605 ] 00:07:58.042 [2024-11-18 00:13:21.650594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.042 [2024-11-18 00:13:21.698496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.301 Running I/O for 10 seconds... 
00:07:58.301 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.301 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:58.301 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:58.301 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.301 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.301 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.301 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:58.302 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:58.302 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:58.302 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:58.302 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:58.302 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:58.302 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:58.302 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:58.302 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:58.302 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:58.302 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.302 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.302 00:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.302 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:58.302 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:58.302 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=550 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 550 -ge 100 ']' 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.562 00:13:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:58.562 [2024-11-18 00:13:22.321619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:58.562 [2024-11-18 00:13:22.321665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.562 [2024-11-18 00:13:22.321683] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:58.562 [2024-11-18 00:13:22.321697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.562 [log condensed: the same ASYNC EVENT REQUEST abort repeats for cid:2 and cid:3] 00:07:58.562 [2024-11-18 00:13:22.321767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc5d70 is same with the state(6) to be set 00:07:58.562 [log condensed: between 00:13:22.321872 and 00:13:22.323790, 64 WRITE commands (sqid:1, cid 0-63, lba 81920 through 89984 in steps of 128, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) are each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the final completions (cid 61, 59, 60, 62, 63) arrive slightly out of cid order] 00:07:58.564 [2024-11-18 00:13:22.324995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*:
[nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:58.564 task offset: 81920 on job bdev=Nvme0n1 fails 00:07:58.564 00:07:58.564 Latency(us) 00:07:58.564 [2024-11-17T23:13:22.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.564 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:58.564 Job: Nvme0n1 ended in about 0.41 seconds with error 00:07:58.564 Verification LBA range: start 0x0 length 0x400 00:07:58.564 Nvme0n1 : 0.41 1562.80 97.68 156.28 0.00 36174.20 2585.03 35340.89 00:07:58.564 [2024-11-17T23:13:22.386Z] =================================================================================================================== 00:07:58.564 [2024-11-17T23:13:22.386Z] Total : 1562.80 97.68 156.28 0.00 36174.20 2585.03 35340.89 00:07:58.564 [2024-11-18 00:13:22.326886] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.564 [2024-11-18 00:13:22.326929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc5d70 (9): Bad file descriptor 00:07:58.564 [2024-11-18 00:13:22.333291] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:07:59.938 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 125605 00:07:59.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (125605) - No such process 00:07:59.938 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:59.938 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:59.938 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:59.938 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:59.938 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:59.938 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:59.938 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:59.938 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:59.938 { 00:07:59.938 "params": { 00:07:59.938 "name": "Nvme$subsystem", 00:07:59.938 "trtype": "$TEST_TRANSPORT", 00:07:59.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:59.938 "adrfam": "ipv4", 00:07:59.938 "trsvcid": "$NVMF_PORT", 00:07:59.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:59.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:59.938 "hdgst": ${hdgst:-false}, 00:07:59.938 "ddgst": ${ddgst:-false} 00:07:59.938 }, 00:07:59.938 "method": "bdev_nvme_attach_controller" 00:07:59.938 } 00:07:59.938 EOF 00:07:59.938 )") 00:07:59.938 00:13:23 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:59.938 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:59.938 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:59.938 00:13:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:59.938 "params": { 00:07:59.938 "name": "Nvme0", 00:07:59.938 "trtype": "tcp", 00:07:59.938 "traddr": "10.0.0.2", 00:07:59.938 "adrfam": "ipv4", 00:07:59.938 "trsvcid": "4420", 00:07:59.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:59.938 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:59.938 "hdgst": false, 00:07:59.938 "ddgst": false 00:07:59.938 }, 00:07:59.938 "method": "bdev_nvme_attach_controller" 00:07:59.938 }' 00:07:59.938 [2024-11-18 00:13:23.373830] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:59.938 [2024-11-18 00:13:23.373905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125886 ] 00:07:59.938 [2024-11-18 00:13:23.442469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.938 [2024-11-18 00:13:23.490363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.938 Running I/O for 1 seconds... 
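[editor's note] The gen_nvmf_target_json trace above builds one JSON fragment per subsystem with a heredoc, comma-joins the fragments, and hands the result to bdevperf. A minimal sketch of that pattern, assuming simplified helper names and the fixed traddr/trsvcid shown in the log (this is an illustration, not the exact SPDK nvmf/common.sh code):

```shell
#!/usr/bin/env bash
# Sketch of the config-generation pattern seen in the gen_nvmf_target_json trace:
# accumulate one JSON fragment per subsystem via a heredoc, then comma-join them.
# Helper name and the fixed addresses are illustrative assumptions.
gen_target_json() {
    local config=()
    local subsystem
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Setting IFS=, makes "${config[*]}" expand comma-joined, as the trace's
    # IFS=, / printf '%s\n' step does.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_target_json 0
```

In the log, bdevperf receives this JSON through process substitution (roughly `bdevperf --json <(gen_target_json 0) ...`), which is why the command line records a `/dev/fd/62` path rather than a file on disk.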
00:08:01.315 1664.00 IOPS, 104.00 MiB/s 00:08:01.315 Latency(us) 00:08:01.315 [2024-11-17T23:13:25.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.315 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:01.315 Verification LBA range: start 0x0 length 0x400 00:08:01.315 Nvme0n1 : 1.02 1686.42 105.40 0.00 0.00 37338.83 5485.61 33204.91 00:08:01.315 [2024-11-17T23:13:25.137Z] =================================================================================================================== 00:08:01.315 [2024-11-17T23:13:25.137Z] Total : 1686.42 105.40 0.00 0.00 37338.83 5485.61 33204.91 00:08:01.315 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:01.315 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:01.315 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:01.315 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:01.315 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:01.315 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:01.315 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:01.315 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:01.315 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:01.315 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:01.315 00:13:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:01.315 rmmod nvme_tcp 00:08:01.315 rmmod nvme_fabrics 00:08:01.315 rmmod nvme_keyring 00:08:01.315 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:01.315 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:01.315 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:01.315 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 125560 ']' 00:08:01.315 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 125560 00:08:01.315 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 125560 ']' 00:08:01.315 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 125560 00:08:01.315 00:13:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:01.315 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.315 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125560 00:08:01.315 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:01.315 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:01.315 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125560' 00:08:01.315 killing process with pid 125560 00:08:01.315 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 125560 00:08:01.315 00:13:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 125560 00:08:01.574 [2024-11-18 00:13:25.220236] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:01.574 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:01.574 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:01.574 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:01.574 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:01.574 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:01.574 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:01.574 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:01.574 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:01.574 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:01.574 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.574 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.574 00:13:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.490 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:03.490 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:03.490 00:08:03.490 real 0m8.731s 00:08:03.490 user 0m18.850s 
00:08:03.490 sys 0m2.828s 00:08:03.490 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.490 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:03.490 ************************************ 00:08:03.490 END TEST nvmf_host_management 00:08:03.490 ************************************ 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:03.750 ************************************ 00:08:03.750 START TEST nvmf_lvol 00:08:03.750 ************************************ 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:03.750 * Looking for test storage... 
00:08:03.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:03.750 00:13:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:03.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.750 --rc genhtml_branch_coverage=1 00:08:03.750 --rc genhtml_function_coverage=1 00:08:03.750 --rc genhtml_legend=1 00:08:03.750 --rc geninfo_all_blocks=1 00:08:03.750 --rc geninfo_unexecuted_blocks=1 
00:08:03.750 00:08:03.750 ' 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:03.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.750 --rc genhtml_branch_coverage=1 00:08:03.750 --rc genhtml_function_coverage=1 00:08:03.750 --rc genhtml_legend=1 00:08:03.750 --rc geninfo_all_blocks=1 00:08:03.750 --rc geninfo_unexecuted_blocks=1 00:08:03.750 00:08:03.750 ' 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:03.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.750 --rc genhtml_branch_coverage=1 00:08:03.750 --rc genhtml_function_coverage=1 00:08:03.750 --rc genhtml_legend=1 00:08:03.750 --rc geninfo_all_blocks=1 00:08:03.750 --rc geninfo_unexecuted_blocks=1 00:08:03.750 00:08:03.750 ' 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:03.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.750 --rc genhtml_branch_coverage=1 00:08:03.750 --rc genhtml_function_coverage=1 00:08:03.750 --rc genhtml_legend=1 00:08:03.750 --rc geninfo_all_blocks=1 00:08:03.750 --rc geninfo_unexecuted_blocks=1 00:08:03.750 00:08:03.750 ' 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.750 00:13:27 
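The trace above exercises `scripts/common.sh`'s version comparison (`lt 1.15 2` via `cmp_versions`): both versions are split on `.`, `-`, and `:` into arrays and compared component-wise. A minimal standalone sketch of that logic (our own function, not SPDK's exact implementation; assumes numeric components):

```shell
# Sketch of the cmp_versions "<" path traced above: split on .-: and
# compare numerically, component by component. Non-numeric components
# would break the arithmetic and are not handled here.
lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1    # equal versions are not less-than
}
```

This matches the decision seen in the log: `lt 1.15 2` succeeds, so the lcov coverage options get exported.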
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.750 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
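The `paths/export.sh` lines above prepend the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` directories on every source, which is why the exported `PATH` in the trace accumulates many duplicate entries. A small helper (our own, not part of SPDK) that deduplicates a `PATH`-style string while preserving first-seen order:

```shell
# Remove duplicate entries from a colon-separated path list, keeping the
# first occurrence of each directory. Relies on IFS word splitting; entries
# containing glob characters are not expected here.
dedupe_path() {
    local IFS=: seen= out= d
    for d in $1; do
        case ":$seen:" in
            *":$d:"*) ;;                          # already kept; skip repeat
            *) seen="$seen:$d"; out="${out:+$out:}$d" ;;
        esac
    done
    printf '%s\n' "$out"
}
```

Applied to the traced value, this would collapse the repeated tool directories down to one entry each; the duplicates are harmless for lookup but make the log hard to read.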
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:03.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:03.751 00:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:06.296 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:06.296 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:06.296 
00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:06.296 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:06.296 00:13:29 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:06.296 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:06.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:06.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:08:06.296 00:08:06.296 --- 10.0.0.2 ping statistics --- 00:08:06.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.296 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:06.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:06.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:08:06.296 00:08:06.296 --- 10.0.0.1 ping statistics --- 00:08:06.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.296 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:06.296 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.297 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:06.297 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:06.297 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:06.297 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:06.297 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
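The `nvmf_tcp_init` phase above isolates the target NIC in its own network namespace (`cvl_0_0` → `cvl_0_0_ns_spdk`, 10.0.0.2/24) while the initiator side (`cvl_0_1`, 10.0.0.1/24) stays in the root namespace, opens port 4420 in iptables, and verifies reachability with ping in both directions. An illustrative reconstruction using a veth pair instead of the physical ice ports (interface and namespace names are ours; requires root):

```shell
# Hedged sketch of the traced topology: target in a netns, initiator in
# the root namespace, NVMe/TCP port opened, connectivity ping-checked.
setup_tcp_netns() {
    ip netns add spdk_tgt_ns                          # target's own namespace
    ip link add veth_init type veth peer name veth_tgt
    ip link set veth_tgt netns spdk_tgt_ns            # target side into the netns
    ip addr add 10.0.0.1/24 dev veth_init             # initiator address
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_init up
    ip netns exec spdk_tgt_ns ip link set veth_tgt up
    ip netns exec spdk_tgt_ns ip link set lo up
    # mirror the traced iptables rule: accept NVMe/TCP traffic to 4420
    iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target check
}
```

With this in place, `nvmf_tgt` is launched inside the namespace (as `ip netns exec ... nvmf_tgt` in the log) so target and initiator traffic traverse a real network path on one host.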
common/autotest_common.sh@726 -- # xtrace_disable 00:08:06.297 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:06.297 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=128099 00:08:06.297 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:06.297 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 128099 00:08:06.297 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 128099 ']' 00:08:06.297 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.297 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.297 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.297 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.297 00:13:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:06.297 [2024-11-18 00:13:29.988138] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:08:06.297 [2024-11-18 00:13:29.988215] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.297 [2024-11-18 00:13:30.064106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:06.297 [2024-11-18 00:13:30.115627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.297 [2024-11-18 00:13:30.115676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.297 [2024-11-18 00:13:30.115690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.297 [2024-11-18 00:13:30.115702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.297 [2024-11-18 00:13:30.115712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:06.297 [2024-11-18 00:13:30.117351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.297 [2024-11-18 00:13:30.117379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.297 [2024-11-18 00:13:30.117383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.555 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.555 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:06.555 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:06.555 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:06.555 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:06.555 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.555 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:06.813 [2024-11-18 00:13:30.555977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.813 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:07.072 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:07.072 00:13:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:07.639 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:07.639 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:07.898 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:08.156 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7a631157-715b-4745-adb1-3b2c01243499 00:08:08.157 00:13:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7a631157-715b-4745-adb1-3b2c01243499 lvol 20 00:08:08.415 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=98183758-4a72-48a2-b865-0273d9627e9c 00:08:08.415 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:08.674 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 98183758-4a72-48a2-b865-0273d9627e9c 00:08:08.932 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:09.190 [2024-11-18 00:13:32.800887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.190 00:13:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.449 00:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=128408 00:08:09.449 00:13:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:09.449 00:13:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:10.385 00:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 98183758-4a72-48a2-b865-0273d9627e9c MY_SNAPSHOT 00:08:10.644 00:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2b87102f-419e-4095-9c77-e37ede2ca035 00:08:10.644 00:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 98183758-4a72-48a2-b865-0273d9627e9c 30 00:08:11.211 00:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2b87102f-419e-4095-9c77-e37ede2ca035 MY_CLONE 00:08:11.469 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bed89f6f-8da5-4228-995e-6aa6de6d3c5a 00:08:11.469 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bed89f6f-8da5-4228-995e-6aa6de6d3c5a 00:08:12.036 00:13:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 128408 00:08:20.158 Initializing NVMe Controllers 00:08:20.158 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:20.158 Controller IO queue size 128, less than required. 00:08:20.158 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:20.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:20.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:20.158 Initialization complete. Launching workers. 00:08:20.158 ======================================================== 00:08:20.158 Latency(us) 00:08:20.158 Device Information : IOPS MiB/s Average min max 00:08:20.158 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10526.25 41.12 12163.09 541.91 73479.81 00:08:20.158 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10557.95 41.24 12130.26 1984.36 80919.48 00:08:20.159 ======================================================== 00:08:20.159 Total : 21084.20 82.36 12146.65 541.91 80919.48 00:08:20.159 00:08:20.159 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:20.159 00:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 98183758-4a72-48a2-b865-0273d9627e9c 00:08:20.417 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7a631157-715b-4745-adb1-3b2c01243499 00:08:20.675 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:20.675 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:20.675 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:20.675 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:20.675 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:20.675 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:20.675 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:20.675 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.675 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:20.675 rmmod nvme_tcp 00:08:20.675 rmmod nvme_fabrics 00:08:20.675 rmmod nvme_keyring 00:08:20.675 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:20.675 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:20.675 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:20.675 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 128099 ']' 00:08:20.675 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 128099 00:08:20.675 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 128099 ']' 00:08:20.675 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 128099 00:08:20.675 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:20.676 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.676 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 128099 00:08:20.676 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.676 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.676 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 128099' 00:08:20.676 killing process with pid 128099 00:08:20.676 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@973 -- # kill 128099 00:08:20.676 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 128099 00:08:20.935 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:20.935 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:20.935 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:20.935 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:20.935 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:20.935 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:20.935 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:20.935 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:20.935 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:20.935 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.935 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.935 00:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:23.484 00:08:23.484 real 0m19.382s 00:08:23.484 user 1m5.206s 00:08:23.484 sys 0m5.952s 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:23.484 ************************************ 00:08:23.484 END TEST nvmf_lvol 00:08:23.484 
************************************ 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:23.484 ************************************ 00:08:23.484 START TEST nvmf_lvs_grow 00:08:23.484 ************************************ 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:23.484 * Looking for test storage... 00:08:23.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:23.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.484 --rc genhtml_branch_coverage=1 00:08:23.484 --rc genhtml_function_coverage=1 00:08:23.484 --rc genhtml_legend=1 00:08:23.484 --rc geninfo_all_blocks=1 00:08:23.484 --rc geninfo_unexecuted_blocks=1 00:08:23.484 00:08:23.484 ' 
00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:23.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.484 --rc genhtml_branch_coverage=1 00:08:23.484 --rc genhtml_function_coverage=1 00:08:23.484 --rc genhtml_legend=1 00:08:23.484 --rc geninfo_all_blocks=1 00:08:23.484 --rc geninfo_unexecuted_blocks=1 00:08:23.484 00:08:23.484 ' 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:23.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.484 --rc genhtml_branch_coverage=1 00:08:23.484 --rc genhtml_function_coverage=1 00:08:23.484 --rc genhtml_legend=1 00:08:23.484 --rc geninfo_all_blocks=1 00:08:23.484 --rc geninfo_unexecuted_blocks=1 00:08:23.484 00:08:23.484 ' 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:23.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.484 --rc genhtml_branch_coverage=1 00:08:23.484 --rc genhtml_function_coverage=1 00:08:23.484 --rc genhtml_legend=1 00:08:23.484 --rc geninfo_all_blocks=1 00:08:23.484 --rc geninfo_unexecuted_blocks=1 00:08:23.484 00:08:23.484 ' 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.484 00:13:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.484 
00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.484 00:13:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:23.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.484 
00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:23.484 00:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:25.393 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:25.393 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:25.393 
00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:25.393 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:25.393 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:25.393 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.652 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.652 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.652 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:25.653 00:13:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:25.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:08:25.653 00:08:25.653 --- 10.0.0.2 ping statistics --- 00:08:25.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.653 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:08:25.653 00:08:25.653 --- 10.0.0.1 ping statistics --- 00:08:25.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.653 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=131813 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 131813 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 131813 ']' 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.653 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.653 [2024-11-18 00:13:49.324522] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:08:25.653 [2024-11-18 00:13:49.324619] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.653 [2024-11-18 00:13:49.397078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.653 [2024-11-18 00:13:49.447553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.653 [2024-11-18 00:13:49.447643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.653 [2024-11-18 00:13:49.447658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.653 [2024-11-18 00:13:49.447669] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.653 [2024-11-18 00:13:49.447678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:25.653 [2024-11-18 00:13:49.448368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.911 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.911 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:25.911 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:25.911 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:25.911 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.911 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.911 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:26.169 [2024-11-18 00:13:49.834015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.169 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:26.169 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.169 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.169 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.169 ************************************ 00:08:26.169 START TEST lvs_grow_clean 00:08:26.169 ************************************ 00:08:26.169 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:26.169 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:26.169 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:26.169 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:26.169 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:26.169 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:26.169 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:26.169 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.169 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.169 00:13:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:26.428 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:26.428 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:26.686 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f5d9dcef-502f-4e25-8046-fffde3a747eb 00:08:26.686 00:13:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5d9dcef-502f-4e25-8046-fffde3a747eb 00:08:26.686 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:27.320 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:27.320 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:27.320 00:13:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f5d9dcef-502f-4e25-8046-fffde3a747eb lvol 150 00:08:27.320 00:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=46e2ea20-258e-4181-afd3-b1aa7c35f90f 00:08:27.320 00:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:27.320 00:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:27.578 [2024-11-18 00:13:51.292749] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:27.578 [2024-11-18 00:13:51.292821] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:27.578 true 00:08:27.578 00:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5d9dcef-502f-4e25-8046-fffde3a747eb 00:08:27.578 00:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:27.837 00:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:27.837 00:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:28.095 00:13:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 46e2ea20-258e-4181-afd3-b1aa7c35f90f 00:08:28.354 00:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:28.613 [2024-11-18 00:13:52.375990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.613 00:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:28.870 00:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=132259 00:08:28.870 00:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:28.870 00:13:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:28.870 00:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 132259 /var/tmp/bdevperf.sock 00:08:28.870 00:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 132259 ']' 00:08:28.870 00:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:28.870 00:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.870 00:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:28.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:28.870 00:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.870 00:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:29.129 [2024-11-18 00:13:52.705781] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:08:29.129 [2024-11-18 00:13:52.705849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132259 ] 00:08:29.129 [2024-11-18 00:13:52.770612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.129 [2024-11-18 00:13:52.815062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.129 00:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.129 00:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:29.129 00:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:29.695 Nvme0n1 00:08:29.695 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:29.954 [ 00:08:29.954 { 00:08:29.955 "name": "Nvme0n1", 00:08:29.955 "aliases": [ 00:08:29.955 "46e2ea20-258e-4181-afd3-b1aa7c35f90f" 00:08:29.955 ], 00:08:29.955 "product_name": "NVMe disk", 00:08:29.955 "block_size": 4096, 00:08:29.955 "num_blocks": 38912, 00:08:29.955 "uuid": "46e2ea20-258e-4181-afd3-b1aa7c35f90f", 00:08:29.955 "numa_id": 0, 00:08:29.955 "assigned_rate_limits": { 00:08:29.955 "rw_ios_per_sec": 0, 00:08:29.955 "rw_mbytes_per_sec": 0, 00:08:29.955 "r_mbytes_per_sec": 0, 00:08:29.955 "w_mbytes_per_sec": 0 00:08:29.955 }, 00:08:29.955 "claimed": false, 00:08:29.955 "zoned": false, 00:08:29.955 "supported_io_types": { 00:08:29.955 "read": true, 
00:08:29.955 "write": true, 00:08:29.955 "unmap": true, 00:08:29.955 "flush": true, 00:08:29.955 "reset": true, 00:08:29.955 "nvme_admin": true, 00:08:29.955 "nvme_io": true, 00:08:29.955 "nvme_io_md": false, 00:08:29.955 "write_zeroes": true, 00:08:29.955 "zcopy": false, 00:08:29.955 "get_zone_info": false, 00:08:29.955 "zone_management": false, 00:08:29.955 "zone_append": false, 00:08:29.955 "compare": true, 00:08:29.955 "compare_and_write": true, 00:08:29.955 "abort": true, 00:08:29.955 "seek_hole": false, 00:08:29.955 "seek_data": false, 00:08:29.955 "copy": true, 00:08:29.955 "nvme_iov_md": false 00:08:29.955 }, 00:08:29.955 "memory_domains": [ 00:08:29.955 { 00:08:29.955 "dma_device_id": "system", 00:08:29.955 "dma_device_type": 1 00:08:29.955 } 00:08:29.955 ], 00:08:29.955 "driver_specific": { 00:08:29.955 "nvme": [ 00:08:29.955 { 00:08:29.955 "trid": { 00:08:29.955 "trtype": "TCP", 00:08:29.955 "adrfam": "IPv4", 00:08:29.955 "traddr": "10.0.0.2", 00:08:29.955 "trsvcid": "4420", 00:08:29.955 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:29.955 }, 00:08:29.955 "ctrlr_data": { 00:08:29.955 "cntlid": 1, 00:08:29.955 "vendor_id": "0x8086", 00:08:29.955 "model_number": "SPDK bdev Controller", 00:08:29.955 "serial_number": "SPDK0", 00:08:29.955 "firmware_revision": "25.01", 00:08:29.955 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:29.955 "oacs": { 00:08:29.955 "security": 0, 00:08:29.955 "format": 0, 00:08:29.955 "firmware": 0, 00:08:29.955 "ns_manage": 0 00:08:29.955 }, 00:08:29.955 "multi_ctrlr": true, 00:08:29.955 "ana_reporting": false 00:08:29.955 }, 00:08:29.955 "vs": { 00:08:29.955 "nvme_version": "1.3" 00:08:29.955 }, 00:08:29.955 "ns_data": { 00:08:29.955 "id": 1, 00:08:29.955 "can_share": true 00:08:29.955 } 00:08:29.955 } 00:08:29.955 ], 00:08:29.955 "mp_policy": "active_passive" 00:08:29.955 } 00:08:29.955 } 00:08:29.955 ] 00:08:29.955 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=132390 
00:08:29.955 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:29.955 00:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:29.955 Running I/O for 10 seconds... 00:08:30.892 Latency(us) 00:08:30.892 [2024-11-17T23:13:54.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.892 Nvme0n1 : 1.00 15179.00 59.29 0.00 0.00 0.00 0.00 0.00 00:08:30.892 [2024-11-17T23:13:54.714Z] =================================================================================================================== 00:08:30.892 [2024-11-17T23:13:54.714Z] Total : 15179.00 59.29 0.00 0.00 0.00 0.00 0.00 00:08:30.892 00:08:31.828 00:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f5d9dcef-502f-4e25-8046-fffde3a747eb 00:08:32.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.087 Nvme0n1 : 2.00 15368.00 60.03 0.00 0.00 0.00 0.00 0.00 00:08:32.087 [2024-11-17T23:13:55.909Z] =================================================================================================================== 00:08:32.087 [2024-11-17T23:13:55.909Z] Total : 15368.00 60.03 0.00 0.00 0.00 0.00 0.00 00:08:32.087 00:08:32.087 true 00:08:32.087 00:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5d9dcef-502f-4e25-8046-fffde3a747eb 00:08:32.087 00:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:08:32.345 00:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:32.345 00:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:32.345 00:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 132390 00:08:32.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.915 Nvme0n1 : 3.00 15452.33 60.36 0.00 0.00 0.00 0.00 0.00 00:08:32.915 [2024-11-17T23:13:56.737Z] =================================================================================================================== 00:08:32.915 [2024-11-17T23:13:56.737Z] Total : 15452.33 60.36 0.00 0.00 0.00 0.00 0.00 00:08:32.915 00:08:34.292 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.292 Nvme0n1 : 4.00 15575.00 60.84 0.00 0.00 0.00 0.00 0.00 00:08:34.292 [2024-11-17T23:13:58.114Z] =================================================================================================================== 00:08:34.292 [2024-11-17T23:13:58.114Z] Total : 15575.00 60.84 0.00 0.00 0.00 0.00 0.00 00:08:34.292 00:08:35.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.231 Nvme0n1 : 5.00 15674.00 61.23 0.00 0.00 0.00 0.00 0.00 00:08:35.231 [2024-11-17T23:13:59.053Z] =================================================================================================================== 00:08:35.231 [2024-11-17T23:13:59.053Z] Total : 15674.00 61.23 0.00 0.00 0.00 0.00 0.00 00:08:35.231 00:08:36.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.180 Nvme0n1 : 6.00 15739.50 61.48 0.00 0.00 0.00 0.00 0.00 00:08:36.180 [2024-11-17T23:14:00.002Z] =================================================================================================================== 00:08:36.180 
[2024-11-17T23:14:00.002Z] Total : 15739.50 61.48 0.00 0.00 0.00 0.00 0.00 00:08:36.180 00:08:37.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.117 Nvme0n1 : 7.00 15759.71 61.56 0.00 0.00 0.00 0.00 0.00 00:08:37.117 [2024-11-17T23:14:00.939Z] =================================================================================================================== 00:08:37.117 [2024-11-17T23:14:00.939Z] Total : 15759.71 61.56 0.00 0.00 0.00 0.00 0.00 00:08:37.117 00:08:38.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.053 Nvme0n1 : 8.00 15807.00 61.75 0.00 0.00 0.00 0.00 0.00 00:08:38.053 [2024-11-17T23:14:01.875Z] =================================================================================================================== 00:08:38.053 [2024-11-17T23:14:01.875Z] Total : 15807.00 61.75 0.00 0.00 0.00 0.00 0.00 00:08:38.053 00:08:38.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.988 Nvme0n1 : 9.00 15850.00 61.91 0.00 0.00 0.00 0.00 0.00 00:08:38.988 [2024-11-17T23:14:02.810Z] =================================================================================================================== 00:08:38.988 [2024-11-17T23:14:02.810Z] Total : 15850.00 61.91 0.00 0.00 0.00 0.00 0.00 00:08:38.989 00:08:39.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.924 Nvme0n1 : 10.00 15903.90 62.12 0.00 0.00 0.00 0.00 0.00 00:08:39.924 [2024-11-17T23:14:03.746Z] =================================================================================================================== 00:08:39.924 [2024-11-17T23:14:03.746Z] Total : 15903.90 62.12 0.00 0.00 0.00 0.00 0.00 00:08:39.924 00:08:39.924 00:08:39.924 Latency(us) 00:08:39.924 [2024-11-17T23:14:03.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:39.924 Nvme0n1 : 10.01 15904.15 62.13 0.00 0.00 8043.76 4587.52 15340.28 00:08:39.924 [2024-11-17T23:14:03.746Z] =================================================================================================================== 00:08:39.924 [2024-11-17T23:14:03.746Z] Total : 15904.15 62.13 0.00 0.00 8043.76 4587.52 15340.28 00:08:39.924 { 00:08:39.924 "results": [ 00:08:39.924 { 00:08:39.924 "job": "Nvme0n1", 00:08:39.924 "core_mask": "0x2", 00:08:39.924 "workload": "randwrite", 00:08:39.924 "status": "finished", 00:08:39.924 "queue_depth": 128, 00:08:39.924 "io_size": 4096, 00:08:39.924 "runtime": 10.00789, 00:08:39.924 "iops": 15904.151624368373, 00:08:39.924 "mibps": 62.125592282688956, 00:08:39.924 "io_failed": 0, 00:08:39.924 "io_timeout": 0, 00:08:39.924 "avg_latency_us": 8043.757802205882, 00:08:39.924 "min_latency_us": 4587.52, 00:08:39.924 "max_latency_us": 15340.278518518518 00:08:39.924 } 00:08:39.924 ], 00:08:39.924 "core_count": 1 00:08:39.924 } 00:08:39.924 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 132259 00:08:39.924 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 132259 ']' 00:08:39.924 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 132259 00:08:39.924 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:39.924 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.924 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132259 00:08:40.182 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:40.182 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:40.182 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132259' 00:08:40.182 killing process with pid 132259 00:08:40.182 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 132259 00:08:40.182 Received shutdown signal, test time was about 10.000000 seconds 00:08:40.182 00:08:40.182 Latency(us) 00:08:40.182 [2024-11-17T23:14:04.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.182 [2024-11-17T23:14:04.004Z] =================================================================================================================== 00:08:40.182 [2024-11-17T23:14:04.004Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:40.182 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 132259 00:08:40.182 00:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:40.441 00:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:40.699 00:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5d9dcef-502f-4e25-8046-fffde3a747eb 00:08:40.699 00:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:40.959 00:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:40.959 00:14:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:40.959 00:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:41.218 [2024-11-18 00:14:05.010138] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:41.218 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5d9dcef-502f-4e25-8046-fffde3a747eb 00:08:41.218 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:41.218 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5d9dcef-502f-4e25-8046-fffde3a747eb 00:08:41.218 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.218 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.218 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.218 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.218 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.218 00:14:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.218 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.218 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:41.218 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5d9dcef-502f-4e25-8046-fffde3a747eb 00:08:41.476 request: 00:08:41.476 { 00:08:41.476 "uuid": "f5d9dcef-502f-4e25-8046-fffde3a747eb", 00:08:41.476 "method": "bdev_lvol_get_lvstores", 00:08:41.476 "req_id": 1 00:08:41.476 } 00:08:41.476 Got JSON-RPC error response 00:08:41.476 response: 00:08:41.476 { 00:08:41.476 "code": -19, 00:08:41.476 "message": "No such device" 00:08:41.476 } 00:08:41.735 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:41.735 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.735 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:41.735 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.735 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:41.994 aio_bdev 00:08:41.994 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 46e2ea20-258e-4181-afd3-b1aa7c35f90f 00:08:41.994 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=46e2ea20-258e-4181-afd3-b1aa7c35f90f 00:08:41.994 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.994 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:41.994 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.994 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.994 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:42.252 00:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 46e2ea20-258e-4181-afd3-b1aa7c35f90f -t 2000 00:08:42.510 [ 00:08:42.510 { 00:08:42.510 "name": "46e2ea20-258e-4181-afd3-b1aa7c35f90f", 00:08:42.510 "aliases": [ 00:08:42.510 "lvs/lvol" 00:08:42.510 ], 00:08:42.510 "product_name": "Logical Volume", 00:08:42.510 "block_size": 4096, 00:08:42.510 "num_blocks": 38912, 00:08:42.510 "uuid": "46e2ea20-258e-4181-afd3-b1aa7c35f90f", 00:08:42.510 "assigned_rate_limits": { 00:08:42.510 "rw_ios_per_sec": 0, 00:08:42.510 "rw_mbytes_per_sec": 0, 00:08:42.510 "r_mbytes_per_sec": 0, 00:08:42.510 "w_mbytes_per_sec": 0 00:08:42.510 }, 00:08:42.510 "claimed": false, 00:08:42.510 "zoned": false, 00:08:42.510 "supported_io_types": { 00:08:42.510 "read": true, 00:08:42.510 "write": true, 00:08:42.510 "unmap": true, 00:08:42.510 "flush": false, 00:08:42.510 "reset": true, 00:08:42.510 
"nvme_admin": false, 00:08:42.510 "nvme_io": false, 00:08:42.510 "nvme_io_md": false, 00:08:42.510 "write_zeroes": true, 00:08:42.510 "zcopy": false, 00:08:42.510 "get_zone_info": false, 00:08:42.510 "zone_management": false, 00:08:42.510 "zone_append": false, 00:08:42.510 "compare": false, 00:08:42.510 "compare_and_write": false, 00:08:42.510 "abort": false, 00:08:42.510 "seek_hole": true, 00:08:42.510 "seek_data": true, 00:08:42.510 "copy": false, 00:08:42.510 "nvme_iov_md": false 00:08:42.510 }, 00:08:42.510 "driver_specific": { 00:08:42.510 "lvol": { 00:08:42.510 "lvol_store_uuid": "f5d9dcef-502f-4e25-8046-fffde3a747eb", 00:08:42.510 "base_bdev": "aio_bdev", 00:08:42.510 "thin_provision": false, 00:08:42.510 "num_allocated_clusters": 38, 00:08:42.510 "snapshot": false, 00:08:42.510 "clone": false, 00:08:42.510 "esnap_clone": false 00:08:42.510 } 00:08:42.510 } 00:08:42.510 } 00:08:42.510 ] 00:08:42.510 00:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:42.510 00:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5d9dcef-502f-4e25-8046-fffde3a747eb 00:08:42.510 00:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:42.769 00:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:42.769 00:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5d9dcef-502f-4e25-8046-fffde3a747eb 00:08:42.769 00:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:43.027 00:14:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:43.027 00:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 46e2ea20-258e-4181-afd3-b1aa7c35f90f 00:08:43.286 00:14:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f5d9dcef-502f-4e25-8046-fffde3a747eb 00:08:43.544 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:43.801 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.802 00:08:43.802 real 0m17.618s 00:08:43.802 user 0m16.702s 00:08:43.802 sys 0m2.020s 00:08:43.802 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.802 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:43.802 ************************************ 00:08:43.802 END TEST lvs_grow_clean 00:08:43.802 ************************************ 00:08:43.802 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:43.802 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:43.802 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.802 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:43.802 ************************************ 
00:08:43.802 START TEST lvs_grow_dirty 00:08:43.802 ************************************ 00:08:43.802 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:43.802 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:43.802 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:43.802 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:43.802 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:43.802 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:43.802 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:43.802 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.802 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.802 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:44.067 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:44.067 00:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:44.325 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3c395ed1-986d-43a5-957f-b58041b00578 00:08:44.325 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c395ed1-986d-43a5-957f-b58041b00578 00:08:44.325 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:44.584 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:44.584 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:44.584 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3c395ed1-986d-43a5-957f-b58041b00578 lvol 150 00:08:44.841 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8c52604d-d3d2-4799-80a9-b5e3313a6c31 00:08:44.841 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:44.842 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:45.099 [2024-11-18 00:14:08.908707] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:08:45.099 [2024-11-18 00:14:08.908794] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:45.099 true 00:08:45.358 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c395ed1-986d-43a5-957f-b58041b00578 00:08:45.358 00:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:45.616 00:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:45.616 00:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:45.874 00:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8c52604d-d3d2-4799-80a9-b5e3313a6c31 00:08:46.132 00:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:46.391 [2024-11-18 00:14:10.000216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.391 00:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:46.650 00:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=134325 00:08:46.651 00:14:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:46.651 00:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 134325 /var/tmp/bdevperf.sock 00:08:46.651 00:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 134325 ']' 00:08:46.651 00:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:46.651 00:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:46.651 00:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.651 00:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:46.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:46.651 00:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.651 00:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:46.651 [2024-11-18 00:14:10.336756] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:08:46.651 [2024-11-18 00:14:10.336834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134325 ] 00:08:46.651 [2024-11-18 00:14:10.403766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.651 [2024-11-18 00:14:10.455207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.909 00:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.909 00:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:46.909 00:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:47.168 Nvme0n1 00:08:47.168 00:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:47.427 [ 00:08:47.427 { 00:08:47.427 "name": "Nvme0n1", 00:08:47.427 "aliases": [ 00:08:47.427 "8c52604d-d3d2-4799-80a9-b5e3313a6c31" 00:08:47.427 ], 00:08:47.427 "product_name": "NVMe disk", 00:08:47.427 "block_size": 4096, 00:08:47.427 "num_blocks": 38912, 00:08:47.427 "uuid": "8c52604d-d3d2-4799-80a9-b5e3313a6c31", 00:08:47.427 "numa_id": 0, 00:08:47.427 "assigned_rate_limits": { 00:08:47.427 "rw_ios_per_sec": 0, 00:08:47.427 "rw_mbytes_per_sec": 0, 00:08:47.427 "r_mbytes_per_sec": 0, 00:08:47.427 "w_mbytes_per_sec": 0 00:08:47.427 }, 00:08:47.427 "claimed": false, 00:08:47.427 "zoned": false, 00:08:47.427 "supported_io_types": { 00:08:47.427 "read": true, 
00:08:47.427 "write": true, 00:08:47.427 "unmap": true, 00:08:47.427 "flush": true, 00:08:47.427 "reset": true, 00:08:47.427 "nvme_admin": true, 00:08:47.427 "nvme_io": true, 00:08:47.427 "nvme_io_md": false, 00:08:47.427 "write_zeroes": true, 00:08:47.427 "zcopy": false, 00:08:47.427 "get_zone_info": false, 00:08:47.427 "zone_management": false, 00:08:47.427 "zone_append": false, 00:08:47.427 "compare": true, 00:08:47.427 "compare_and_write": true, 00:08:47.427 "abort": true, 00:08:47.427 "seek_hole": false, 00:08:47.427 "seek_data": false, 00:08:47.427 "copy": true, 00:08:47.427 "nvme_iov_md": false 00:08:47.427 }, 00:08:47.427 "memory_domains": [ 00:08:47.427 { 00:08:47.427 "dma_device_id": "system", 00:08:47.427 "dma_device_type": 1 00:08:47.427 } 00:08:47.427 ], 00:08:47.427 "driver_specific": { 00:08:47.427 "nvme": [ 00:08:47.427 { 00:08:47.427 "trid": { 00:08:47.427 "trtype": "TCP", 00:08:47.427 "adrfam": "IPv4", 00:08:47.427 "traddr": "10.0.0.2", 00:08:47.427 "trsvcid": "4420", 00:08:47.427 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:47.427 }, 00:08:47.427 "ctrlr_data": { 00:08:47.427 "cntlid": 1, 00:08:47.427 "vendor_id": "0x8086", 00:08:47.427 "model_number": "SPDK bdev Controller", 00:08:47.427 "serial_number": "SPDK0", 00:08:47.427 "firmware_revision": "25.01", 00:08:47.427 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:47.427 "oacs": { 00:08:47.427 "security": 0, 00:08:47.427 "format": 0, 00:08:47.427 "firmware": 0, 00:08:47.427 "ns_manage": 0 00:08:47.427 }, 00:08:47.427 "multi_ctrlr": true, 00:08:47.427 "ana_reporting": false 00:08:47.427 }, 00:08:47.427 "vs": { 00:08:47.427 "nvme_version": "1.3" 00:08:47.427 }, 00:08:47.427 "ns_data": { 00:08:47.427 "id": 1, 00:08:47.427 "can_share": true 00:08:47.427 } 00:08:47.427 } 00:08:47.427 ], 00:08:47.427 "mp_policy": "active_passive" 00:08:47.427 } 00:08:47.427 } 00:08:47.427 ] 00:08:47.427 00:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=134461 
00:08:47.427 00:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:47.427 00:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:47.686 Running I/O for 10 seconds... 00:08:48.621 Latency(us) 00:08:48.621 [2024-11-17T23:14:12.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.621 Nvme0n1 : 1.00 15179.00 59.29 0.00 0.00 0.00 0.00 0.00 00:08:48.622 [2024-11-17T23:14:12.444Z] =================================================================================================================== 00:08:48.622 [2024-11-17T23:14:12.444Z] Total : 15179.00 59.29 0.00 0.00 0.00 0.00 0.00 00:08:48.622 00:08:49.556 00:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3c395ed1-986d-43a5-957f-b58041b00578 00:08:49.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.556 Nvme0n1 : 2.00 15338.00 59.91 0.00 0.00 0.00 0.00 0.00 00:08:49.556 [2024-11-17T23:14:13.378Z] =================================================================================================================== 00:08:49.556 [2024-11-17T23:14:13.378Z] Total : 15338.00 59.91 0.00 0.00 0.00 0.00 0.00 00:08:49.556 00:08:49.815 true 00:08:49.815 00:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c395ed1-986d-43a5-957f-b58041b00578 00:08:49.815 00:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:08:50.074 00:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:50.074 00:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:50.074 00:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 134461 00:08:50.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.641 Nvme0n1 : 3.00 15432.33 60.28 0.00 0.00 0.00 0.00 0.00 00:08:50.641 [2024-11-17T23:14:14.463Z] =================================================================================================================== 00:08:50.641 [2024-11-17T23:14:14.463Z] Total : 15432.33 60.28 0.00 0.00 0.00 0.00 0.00 00:08:50.641 00:08:51.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.576 Nvme0n1 : 4.00 15519.75 60.62 0.00 0.00 0.00 0.00 0.00 00:08:51.576 [2024-11-17T23:14:15.398Z] =================================================================================================================== 00:08:51.576 [2024-11-17T23:14:15.398Z] Total : 15519.75 60.62 0.00 0.00 0.00 0.00 0.00 00:08:51.576 00:08:52.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.510 Nvme0n1 : 5.00 15590.80 60.90 0.00 0.00 0.00 0.00 0.00 00:08:52.510 [2024-11-17T23:14:16.332Z] =================================================================================================================== 00:08:52.510 [2024-11-17T23:14:16.332Z] Total : 15590.80 60.90 0.00 0.00 0.00 0.00 0.00 00:08:52.510 00:08:53.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.887 Nvme0n1 : 6.00 15638.17 61.09 0.00 0.00 0.00 0.00 0.00 00:08:53.887 [2024-11-17T23:14:17.709Z] =================================================================================================================== 00:08:53.887 
[2024-11-17T23:14:17.709Z] Total : 15638.17 61.09 0.00 0.00 0.00 0.00 0.00 00:08:53.887 00:08:54.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.822 Nvme0n1 : 7.00 15672.00 61.22 0.00 0.00 0.00 0.00 0.00 00:08:54.822 [2024-11-17T23:14:18.644Z] =================================================================================================================== 00:08:54.822 [2024-11-17T23:14:18.644Z] Total : 15672.00 61.22 0.00 0.00 0.00 0.00 0.00 00:08:54.822 00:08:55.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.759 Nvme0n1 : 8.00 15697.38 61.32 0.00 0.00 0.00 0.00 0.00 00:08:55.759 [2024-11-17T23:14:19.581Z] =================================================================================================================== 00:08:55.759 [2024-11-17T23:14:19.581Z] Total : 15697.38 61.32 0.00 0.00 0.00 0.00 0.00 00:08:55.759 00:08:56.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.695 Nvme0n1 : 9.00 15717.11 61.39 0.00 0.00 0.00 0.00 0.00 00:08:56.695 [2024-11-17T23:14:20.517Z] =================================================================================================================== 00:08:56.695 [2024-11-17T23:14:20.517Z] Total : 15717.11 61.39 0.00 0.00 0.00 0.00 0.00 00:08:56.695 00:08:57.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.628 Nvme0n1 : 10.00 15739.30 61.48 0.00 0.00 0.00 0.00 0.00 00:08:57.628 [2024-11-17T23:14:21.450Z] =================================================================================================================== 00:08:57.628 [2024-11-17T23:14:21.450Z] Total : 15739.30 61.48 0.00 0.00 0.00 0.00 0.00 00:08:57.628 00:08:57.628 00:08:57.628 Latency(us) 00:08:57.628 [2024-11-17T23:14:21.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:57.628 Nvme0n1 : 10.00 15739.61 61.48 0.00 0.00 8127.61 2305.90 15922.82 00:08:57.628 [2024-11-17T23:14:21.450Z] =================================================================================================================== 00:08:57.628 [2024-11-17T23:14:21.450Z] Total : 15739.61 61.48 0.00 0.00 8127.61 2305.90 15922.82 00:08:57.628 { 00:08:57.628 "results": [ 00:08:57.628 { 00:08:57.628 "job": "Nvme0n1", 00:08:57.628 "core_mask": "0x2", 00:08:57.628 "workload": "randwrite", 00:08:57.628 "status": "finished", 00:08:57.628 "queue_depth": 128, 00:08:57.628 "io_size": 4096, 00:08:57.628 "runtime": 10.003868, 00:08:57.628 "iops": 15739.611918110075, 00:08:57.628 "mibps": 61.48285905511748, 00:08:57.628 "io_failed": 0, 00:08:57.628 "io_timeout": 0, 00:08:57.628 "avg_latency_us": 8127.607959826305, 00:08:57.628 "min_latency_us": 2305.8962962962964, 00:08:57.628 "max_latency_us": 15922.82074074074 00:08:57.628 } 00:08:57.628 ], 00:08:57.628 "core_count": 1 00:08:57.628 } 00:08:57.628 00:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 134325 00:08:57.628 00:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 134325 ']' 00:08:57.628 00:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 134325 00:08:57.628 00:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:57.628 00:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.628 00:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 134325 00:08:57.628 00:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:57.628 00:14:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:57.628 00:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 134325' 00:08:57.628 killing process with pid 134325 00:08:57.628 00:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 134325 00:08:57.628 Received shutdown signal, test time was about 10.000000 seconds 00:08:57.628 00:08:57.628 Latency(us) 00:08:57.628 [2024-11-17T23:14:21.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.628 [2024-11-17T23:14:21.450Z] =================================================================================================================== 00:08:57.628 [2024-11-17T23:14:21.450Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:57.628 00:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 134325 00:08:57.887 00:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:58.146 00:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:58.403 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c395ed1-986d-43a5-957f-b58041b00578 00:08:58.403 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:58.662 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:08:58.662 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:58.662 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 131813 00:08:58.662 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 131813 00:08:58.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 131813 Killed "${NVMF_APP[@]}" "$@" 00:08:58.662 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:58.662 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:58.662 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:58.662 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:58.662 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:58.662 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=135797 00:08:58.662 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:58.662 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 135797 00:08:58.662 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 135797 ']' 00:08:58.662 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.662 00:14:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.662 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.662 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.662 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:58.921 [2024-11-18 00:14:22.500523] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:08:58.921 [2024-11-18 00:14:22.500616] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.921 [2024-11-18 00:14:22.573937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.921 [2024-11-18 00:14:22.621345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.921 [2024-11-18 00:14:22.621401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.921 [2024-11-18 00:14:22.621414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.921 [2024-11-18 00:14:22.621425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.921 [2024-11-18 00:14:22.621434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:58.921 [2024-11-18 00:14:22.622025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.921 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.921 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:58.921 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:58.921 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:58.921 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:59.180 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.180 00:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:59.439 [2024-11-18 00:14:23.007413] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:59.439 [2024-11-18 00:14:23.007536] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:59.439 [2024-11-18 00:14:23.007585] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:59.439 00:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:59.439 00:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8c52604d-d3d2-4799-80a9-b5e3313a6c31 00:08:59.439 00:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8c52604d-d3d2-4799-80a9-b5e3313a6c31 
00:08:59.439 00:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.439 00:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:59.439 00:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.439 00:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.439 00:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:59.698 00:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8c52604d-d3d2-4799-80a9-b5e3313a6c31 -t 2000 00:08:59.956 [ 00:08:59.956 { 00:08:59.956 "name": "8c52604d-d3d2-4799-80a9-b5e3313a6c31", 00:08:59.956 "aliases": [ 00:08:59.956 "lvs/lvol" 00:08:59.956 ], 00:08:59.957 "product_name": "Logical Volume", 00:08:59.957 "block_size": 4096, 00:08:59.957 "num_blocks": 38912, 00:08:59.957 "uuid": "8c52604d-d3d2-4799-80a9-b5e3313a6c31", 00:08:59.957 "assigned_rate_limits": { 00:08:59.957 "rw_ios_per_sec": 0, 00:08:59.957 "rw_mbytes_per_sec": 0, 00:08:59.957 "r_mbytes_per_sec": 0, 00:08:59.957 "w_mbytes_per_sec": 0 00:08:59.957 }, 00:08:59.957 "claimed": false, 00:08:59.957 "zoned": false, 00:08:59.957 "supported_io_types": { 00:08:59.957 "read": true, 00:08:59.957 "write": true, 00:08:59.957 "unmap": true, 00:08:59.957 "flush": false, 00:08:59.957 "reset": true, 00:08:59.957 "nvme_admin": false, 00:08:59.957 "nvme_io": false, 00:08:59.957 "nvme_io_md": false, 00:08:59.957 "write_zeroes": true, 00:08:59.957 "zcopy": false, 00:08:59.957 "get_zone_info": false, 00:08:59.957 "zone_management": false, 00:08:59.957 "zone_append": 
false, 00:08:59.957 "compare": false, 00:08:59.957 "compare_and_write": false, 00:08:59.957 "abort": false, 00:08:59.957 "seek_hole": true, 00:08:59.957 "seek_data": true, 00:08:59.957 "copy": false, 00:08:59.957 "nvme_iov_md": false 00:08:59.957 }, 00:08:59.957 "driver_specific": { 00:08:59.957 "lvol": { 00:08:59.957 "lvol_store_uuid": "3c395ed1-986d-43a5-957f-b58041b00578", 00:08:59.957 "base_bdev": "aio_bdev", 00:08:59.957 "thin_provision": false, 00:08:59.957 "num_allocated_clusters": 38, 00:08:59.957 "snapshot": false, 00:08:59.957 "clone": false, 00:08:59.957 "esnap_clone": false 00:08:59.957 } 00:08:59.957 } 00:08:59.957 } 00:08:59.957 ] 00:08:59.957 00:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:59.957 00:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c395ed1-986d-43a5-957f-b58041b00578 00:08:59.957 00:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:00.215 00:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:00.215 00:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c395ed1-986d-43a5-957f-b58041b00578 00:09:00.215 00:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:00.474 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:00.474 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:00.732 [2024-11-18 00:14:24.369006] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:00.733 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c395ed1-986d-43a5-957f-b58041b00578 00:09:00.733 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:00.733 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c395ed1-986d-43a5-957f-b58041b00578 00:09:00.733 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.733 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.733 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.733 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.733 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.733 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.733 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.733 00:14:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:00.733 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c395ed1-986d-43a5-957f-b58041b00578 00:09:00.992 request: 00:09:00.992 { 00:09:00.992 "uuid": "3c395ed1-986d-43a5-957f-b58041b00578", 00:09:00.992 "method": "bdev_lvol_get_lvstores", 00:09:00.992 "req_id": 1 00:09:00.992 } 00:09:00.992 Got JSON-RPC error response 00:09:00.992 response: 00:09:00.992 { 00:09:00.992 "code": -19, 00:09:00.992 "message": "No such device" 00:09:00.992 } 00:09:00.992 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:00.992 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:00.992 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:00.992 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:00.992 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:01.250 aio_bdev 00:09:01.250 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8c52604d-d3d2-4799-80a9-b5e3313a6c31 00:09:01.250 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8c52604d-d3d2-4799-80a9-b5e3313a6c31 00:09:01.250 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.250 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:01.250 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.250 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.250 00:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:01.508 00:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8c52604d-d3d2-4799-80a9-b5e3313a6c31 -t 2000 00:09:01.768 [ 00:09:01.768 { 00:09:01.768 "name": "8c52604d-d3d2-4799-80a9-b5e3313a6c31", 00:09:01.768 "aliases": [ 00:09:01.768 "lvs/lvol" 00:09:01.768 ], 00:09:01.768 "product_name": "Logical Volume", 00:09:01.768 "block_size": 4096, 00:09:01.768 "num_blocks": 38912, 00:09:01.768 "uuid": "8c52604d-d3d2-4799-80a9-b5e3313a6c31", 00:09:01.768 "assigned_rate_limits": { 00:09:01.768 "rw_ios_per_sec": 0, 00:09:01.768 "rw_mbytes_per_sec": 0, 00:09:01.768 "r_mbytes_per_sec": 0, 00:09:01.768 "w_mbytes_per_sec": 0 00:09:01.768 }, 00:09:01.768 "claimed": false, 00:09:01.768 "zoned": false, 00:09:01.768 "supported_io_types": { 00:09:01.768 "read": true, 00:09:01.768 "write": true, 00:09:01.768 "unmap": true, 00:09:01.768 "flush": false, 00:09:01.768 "reset": true, 00:09:01.768 "nvme_admin": false, 00:09:01.768 "nvme_io": false, 00:09:01.768 "nvme_io_md": false, 00:09:01.768 "write_zeroes": true, 00:09:01.768 "zcopy": false, 00:09:01.768 "get_zone_info": false, 00:09:01.768 "zone_management": false, 00:09:01.768 "zone_append": false, 00:09:01.768 "compare": false, 00:09:01.768 "compare_and_write": false, 
00:09:01.768 "abort": false, 00:09:01.768 "seek_hole": true, 00:09:01.768 "seek_data": true, 00:09:01.768 "copy": false, 00:09:01.768 "nvme_iov_md": false 00:09:01.768 }, 00:09:01.768 "driver_specific": { 00:09:01.768 "lvol": { 00:09:01.768 "lvol_store_uuid": "3c395ed1-986d-43a5-957f-b58041b00578", 00:09:01.768 "base_bdev": "aio_bdev", 00:09:01.768 "thin_provision": false, 00:09:01.768 "num_allocated_clusters": 38, 00:09:01.768 "snapshot": false, 00:09:01.768 "clone": false, 00:09:01.768 "esnap_clone": false 00:09:01.768 } 00:09:01.768 } 00:09:01.768 } 00:09:01.768 ] 00:09:01.768 00:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:01.768 00:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c395ed1-986d-43a5-957f-b58041b00578 00:09:01.768 00:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:02.026 00:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:02.026 00:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c395ed1-986d-43a5-957f-b58041b00578 00:09:02.026 00:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:02.286 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:02.286 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8c52604d-d3d2-4799-80a9-b5e3313a6c31 00:09:02.544 00:14:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3c395ed1-986d-43a5-957f-b58041b00578 00:09:02.803 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:03.062 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:03.320 00:09:03.320 real 0m19.334s 00:09:03.320 user 0m49.196s 00:09:03.320 sys 0m4.347s 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:03.320 ************************************ 00:09:03.320 END TEST lvs_grow_dirty 00:09:03.320 ************************************ 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:03.320 nvmf_trace.0 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:03.320 rmmod nvme_tcp 00:09:03.320 rmmod nvme_fabrics 00:09:03.320 rmmod nvme_keyring 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:03.320 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 135797 ']' 00:09:03.321 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 135797 00:09:03.321 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 135797 ']' 00:09:03.321 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 135797 
00:09:03.321 00:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:03.321 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.321 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 135797 00:09:03.321 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.321 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.321 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 135797' 00:09:03.321 killing process with pid 135797 00:09:03.321 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 135797 00:09:03.321 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 135797 00:09:03.581 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:03.581 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:03.581 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:03.581 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:03.581 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:03.581 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:03.581 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:03.581 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:03.581 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:09:03.581 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.581 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.581 00:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.504 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:05.504 00:09:05.504 real 0m42.501s 00:09:05.504 user 1m11.910s 00:09:05.504 sys 0m8.438s 00:09:05.504 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.504 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:05.504 ************************************ 00:09:05.504 END TEST nvmf_lvs_grow 00:09:05.504 ************************************ 00:09:05.504 00:14:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:05.504 00:14:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:05.504 00:14:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.504 00:14:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.764 ************************************ 00:09:05.764 START TEST nvmf_bdev_io_wait 00:09:05.764 ************************************ 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:05.764 * Looking for test storage... 
00:09:05.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:05.764 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.764 --rc genhtml_branch_coverage=1 00:09:05.764 --rc genhtml_function_coverage=1 00:09:05.764 --rc genhtml_legend=1 00:09:05.764 --rc geninfo_all_blocks=1 00:09:05.764 --rc geninfo_unexecuted_blocks=1 00:09:05.764 00:09:05.764 ' 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:05.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.764 --rc genhtml_branch_coverage=1 00:09:05.764 --rc genhtml_function_coverage=1 00:09:05.764 --rc genhtml_legend=1 00:09:05.764 --rc geninfo_all_blocks=1 00:09:05.764 --rc geninfo_unexecuted_blocks=1 00:09:05.764 00:09:05.764 ' 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:05.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.764 --rc genhtml_branch_coverage=1 00:09:05.764 --rc genhtml_function_coverage=1 00:09:05.764 --rc genhtml_legend=1 00:09:05.764 --rc geninfo_all_blocks=1 00:09:05.764 --rc geninfo_unexecuted_blocks=1 00:09:05.764 00:09:05.764 ' 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:05.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.764 --rc genhtml_branch_coverage=1 00:09:05.764 --rc genhtml_function_coverage=1 00:09:05.764 --rc genhtml_legend=1 00:09:05.764 --rc geninfo_all_blocks=1 00:09:05.764 --rc geninfo_unexecuted_blocks=1 00:09:05.764 00:09:05.764 ' 00:09:05.764 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.765 00:14:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:05.765 00:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:08.300 00:14:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:08.300 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:08.300 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.300 00:14:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:08.300 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.300 
00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.300 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:08.301 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.301 00:14:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:08.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:09:08.301 00:09:08.301 --- 10.0.0.2 ping statistics --- 00:09:08.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.301 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:09:08.301 00:09:08.301 --- 10.0.0.1 ping statistics --- 00:09:08.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.301 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=138335 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 138335 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 138335 ']' 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.301 00:14:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.301 [2024-11-18 00:14:31.973492] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:08.301 [2024-11-18 00:14:31.973596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.301 [2024-11-18 00:14:32.044773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.301 [2024-11-18 00:14:32.096607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.301 [2024-11-18 00:14:32.096654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:08.301 [2024-11-18 00:14:32.096678] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.301 [2024-11-18 00:14:32.096690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.301 [2024-11-18 00:14:32.096700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.301 [2024-11-18 00:14:32.098274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.301 [2024-11-18 00:14:32.098350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.301 [2024-11-18 00:14:32.098370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.301 [2024-11-18 00:14:32.098374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.560 00:14:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.560 [2024-11-18 00:14:32.308462] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.560 Malloc0 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.560 
00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.560 [2024-11-18 00:14:32.359043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=138482 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:08.560 
00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=138484 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:08.560 { 00:09:08.560 "params": { 00:09:08.560 "name": "Nvme$subsystem", 00:09:08.560 "trtype": "$TEST_TRANSPORT", 00:09:08.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:08.560 "adrfam": "ipv4", 00:09:08.560 "trsvcid": "$NVMF_PORT", 00:09:08.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:08.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:08.560 "hdgst": ${hdgst:-false}, 00:09:08.560 "ddgst": ${ddgst:-false} 00:09:08.560 }, 00:09:08.560 "method": "bdev_nvme_attach_controller" 00:09:08.560 } 00:09:08.560 EOF 00:09:08.560 )") 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=138486 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:08.560 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:08.560 { 00:09:08.560 "params": { 00:09:08.560 
"name": "Nvme$subsystem", 00:09:08.560 "trtype": "$TEST_TRANSPORT", 00:09:08.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:08.560 "adrfam": "ipv4", 00:09:08.560 "trsvcid": "$NVMF_PORT", 00:09:08.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:08.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:08.561 "hdgst": ${hdgst:-false}, 00:09:08.561 "ddgst": ${ddgst:-false} 00:09:08.561 }, 00:09:08.561 "method": "bdev_nvme_attach_controller" 00:09:08.561 } 00:09:08.561 EOF 00:09:08.561 )") 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=138489 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:08.561 { 00:09:08.561 "params": { 00:09:08.561 "name": "Nvme$subsystem", 00:09:08.561 "trtype": "$TEST_TRANSPORT", 00:09:08.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:08.561 "adrfam": "ipv4", 00:09:08.561 "trsvcid": "$NVMF_PORT", 00:09:08.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:08.561 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:08.561 "hdgst": ${hdgst:-false}, 00:09:08.561 "ddgst": ${ddgst:-false} 00:09:08.561 }, 00:09:08.561 "method": "bdev_nvme_attach_controller" 00:09:08.561 } 00:09:08.561 EOF 00:09:08.561 )") 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:08.561 { 00:09:08.561 "params": { 00:09:08.561 "name": "Nvme$subsystem", 00:09:08.561 "trtype": "$TEST_TRANSPORT", 00:09:08.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:08.561 "adrfam": "ipv4", 00:09:08.561 "trsvcid": "$NVMF_PORT", 00:09:08.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:08.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:08.561 "hdgst": ${hdgst:-false}, 00:09:08.561 "ddgst": ${ddgst:-false} 00:09:08.561 }, 00:09:08.561 "method": "bdev_nvme_attach_controller" 00:09:08.561 } 00:09:08.561 EOF 00:09:08.561 )") 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 138482 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:08.561 "params": { 00:09:08.561 "name": "Nvme1", 00:09:08.561 "trtype": "tcp", 00:09:08.561 "traddr": "10.0.0.2", 00:09:08.561 "adrfam": "ipv4", 00:09:08.561 "trsvcid": "4420", 00:09:08.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:08.561 "hdgst": false, 00:09:08.561 "ddgst": false 00:09:08.561 }, 00:09:08.561 "method": "bdev_nvme_attach_controller" 00:09:08.561 }' 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:08.561 "params": { 00:09:08.561 "name": "Nvme1", 00:09:08.561 "trtype": "tcp", 00:09:08.561 "traddr": "10.0.0.2", 00:09:08.561 "adrfam": "ipv4", 00:09:08.561 "trsvcid": "4420", 00:09:08.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:08.561 "hdgst": false, 00:09:08.561 "ddgst": false 00:09:08.561 }, 00:09:08.561 "method": "bdev_nvme_attach_controller" 00:09:08.561 }' 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:08.561 "params": { 00:09:08.561 "name": "Nvme1", 00:09:08.561 "trtype": "tcp", 00:09:08.561 "traddr": "10.0.0.2", 00:09:08.561 "adrfam": "ipv4", 00:09:08.561 "trsvcid": "4420", 00:09:08.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:08.561 "hdgst": false, 00:09:08.561 "ddgst": false 00:09:08.561 }, 00:09:08.561 "method": "bdev_nvme_attach_controller" 00:09:08.561 }' 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:08.561 00:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:08.561 "params": { 00:09:08.561 "name": "Nvme1", 00:09:08.561 "trtype": "tcp", 00:09:08.561 "traddr": "10.0.0.2", 00:09:08.561 "adrfam": "ipv4", 00:09:08.561 "trsvcid": "4420", 00:09:08.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:08.561 "hdgst": false, 00:09:08.561 "ddgst": false 00:09:08.561 }, 00:09:08.561 "method": "bdev_nvme_attach_controller" 00:09:08.561 }' 00:09:08.819 [2024-11-18 00:14:32.408532] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:08.819 [2024-11-18 00:14:32.408533] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:09:08.819 [2024-11-18 00:14:32.408638] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:08.819 [2024-11-18 00:14:32.408639] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:08.819 [2024-11-18 00:14:32.409700] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:08.819 [2024-11-18 00:14:32.409700] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:08.819 [2024-11-18 00:14:32.409781] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:08.819 [2024-11-18 00:14:32.409781] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:08.819 [2024-11-18 00:14:32.591798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.819 [2024-11-18 00:14:32.633976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:09.078 [2024-11-18 00:14:32.694565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.078 [2024-11-18 00:14:32.738991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:09.078 [2024-11-18 00:14:32.769589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.078 [2024-11-18 00:14:32.806921] reactor.c:1005:reactor_run: *NOTICE*: 
Reactor started on core 6 00:09:09.078 [2024-11-18 00:14:32.844321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.078 [2024-11-18 00:14:32.882682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:09.337 Running I/O for 1 seconds... 00:09:09.337 Running I/O for 1 seconds... 00:09:09.337 Running I/O for 1 seconds... 00:09:09.337 Running I/O for 1 seconds... 00:09:10.532 5896.00 IOPS, 23.03 MiB/s 00:09:10.532 Latency(us) 00:09:10.532 [2024-11-17T23:14:34.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.532 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:10.532 Nvme1n1 : 1.02 5918.29 23.12 0.00 0.00 21472.24 6165.24 28350.39 00:09:10.532 [2024-11-17T23:14:34.354Z] =================================================================================================================== 00:09:10.532 [2024-11-17T23:14:34.354Z] Total : 5918.29 23.12 0.00 0.00 21472.24 6165.24 28350.39 00:09:10.532 5630.00 IOPS, 21.99 MiB/s 00:09:10.532 Latency(us) 00:09:10.532 [2024-11-17T23:14:34.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.532 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:10.532 Nvme1n1 : 1.01 5717.15 22.33 0.00 0.00 22296.01 6650.69 44467.39 00:09:10.532 [2024-11-17T23:14:34.354Z] =================================================================================================================== 00:09:10.532 [2024-11-17T23:14:34.354Z] Total : 5717.15 22.33 0.00 0.00 22296.01 6650.69 44467.39 00:09:10.532 9517.00 IOPS, 37.18 MiB/s [2024-11-17T23:14:34.354Z] 188184.00 IOPS, 735.09 MiB/s 00:09:10.532 Latency(us) 00:09:10.532 [2024-11-17T23:14:34.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.532 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:10.532 Nvme1n1 : 1.00 187835.49 733.73 0.00 0.00 677.74 300.37 1844.72 00:09:10.532 
[2024-11-17T23:14:34.354Z] =================================================================================================================== 00:09:10.532 [2024-11-17T23:14:34.354Z] Total : 187835.49 733.73 0.00 0.00 677.74 300.37 1844.72 00:09:10.532 00:09:10.532 Latency(us) 00:09:10.532 [2024-11-17T23:14:34.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.532 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:10.532 Nvme1n1 : 1.01 9585.03 37.44 0.00 0.00 13307.19 5194.33 23981.32 00:09:10.532 [2024-11-17T23:14:34.354Z] =================================================================================================================== 00:09:10.532 [2024-11-17T23:14:34.354Z] Total : 9585.03 37.44 0.00 0.00 13307.19 5194.33 23981.32 00:09:10.532 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 138484 00:09:10.532 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 138486 00:09:10.532 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 138489 00:09:10.532 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:10.532 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.532 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.532 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.532 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:10.532 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:10.532 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 
00:09:10.532 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:10.533 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:10.533 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:10.533 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:10.533 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:10.533 rmmod nvme_tcp 00:09:10.792 rmmod nvme_fabrics 00:09:10.792 rmmod nvme_keyring 00:09:10.792 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:10.792 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:10.792 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:10.792 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 138335 ']' 00:09:10.792 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 138335 00:09:10.792 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 138335 ']' 00:09:10.792 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 138335 00:09:10.792 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:10.792 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.792 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 138335 00:09:10.792 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.792 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.792 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 138335' 00:09:10.792 killing process with pid 138335 00:09:10.792 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 138335 00:09:10.792 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 138335 00:09:11.052 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:11.052 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:11.052 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:11.052 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:11.052 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:11.052 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:11.052 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:11.052 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:11.052 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:11.052 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.052 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.052 00:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.965 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush 
cvl_0_1 00:09:12.965 00:09:12.965 real 0m7.342s 00:09:12.965 user 0m15.845s 00:09:12.965 sys 0m3.628s 00:09:12.965 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.965 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.965 ************************************ 00:09:12.965 END TEST nvmf_bdev_io_wait 00:09:12.965 ************************************ 00:09:12.965 00:14:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:12.965 00:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:12.965 00:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.965 00:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.965 ************************************ 00:09:12.965 START TEST nvmf_queue_depth 00:09:12.965 ************************************ 00:09:12.965 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:12.965 * Looking for test storage... 
00:09:12.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:12.965 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:12.965 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:13.224 
00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.224 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:13.225 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:13.225 --rc genhtml_branch_coverage=1 00:09:13.225 --rc genhtml_function_coverage=1 00:09:13.225 --rc genhtml_legend=1 00:09:13.225 --rc geninfo_all_blocks=1 00:09:13.225 --rc geninfo_unexecuted_blocks=1 00:09:13.225 00:09:13.225 ' 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:13.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.225 --rc genhtml_branch_coverage=1 00:09:13.225 --rc genhtml_function_coverage=1 00:09:13.225 --rc genhtml_legend=1 00:09:13.225 --rc geninfo_all_blocks=1 00:09:13.225 --rc geninfo_unexecuted_blocks=1 00:09:13.225 00:09:13.225 ' 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:13.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.225 --rc genhtml_branch_coverage=1 00:09:13.225 --rc genhtml_function_coverage=1 00:09:13.225 --rc genhtml_legend=1 00:09:13.225 --rc geninfo_all_blocks=1 00:09:13.225 --rc geninfo_unexecuted_blocks=1 00:09:13.225 00:09:13.225 ' 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:13.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.225 --rc genhtml_branch_coverage=1 00:09:13.225 --rc genhtml_function_coverage=1 00:09:13.225 --rc genhtml_legend=1 00:09:13.225 --rc geninfo_all_blocks=1 00:09:13.225 --rc geninfo_unexecuted_blocks=1 00:09:13.225 00:09:13.225 ' 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:13.225 00:14:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.225 00:14:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:13.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:13.225 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:13.226 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.226 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.226 00:14:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.226 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:13.226 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:13.226 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:13.226 00:14:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.764 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:15.764 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:15.764 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:15.764 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:15.764 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:15.764 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:15.764 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:15.764 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:15.764 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:15.764 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:15.764 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:15.764 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:15.764 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:15.764 00:14:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:15.764 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:15.764 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.764 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.764 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.764 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:15.765 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:15.765 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:15.765 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:15.765 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:15.765 
00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:15.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:15.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:09:15.765 00:09:15.765 --- 10.0.0.2 ping statistics --- 00:09:15.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.765 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:15.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:15.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:09:15.765 00:09:15.765 --- 10.0.0.1 ping statistics --- 00:09:15.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.765 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=140714 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 140714 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 140714 ']' 00:09:15.765 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.766 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.766 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.766 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.766 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.766 [2024-11-18 00:14:39.288105] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:15.766 [2024-11-18 00:14:39.288185] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.766 [2024-11-18 00:14:39.364156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.766 [2024-11-18 00:14:39.412200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.766 [2024-11-18 00:14:39.412248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:15.766 [2024-11-18 00:14:39.412262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.766 [2024-11-18 00:14:39.412273] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.766 [2024-11-18 00:14:39.412283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.766 [2024-11-18 00:14:39.412877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.766 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.766 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:15.766 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:15.766 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:15.766 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.766 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.766 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:15.766 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.766 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.766 [2024-11-18 00:14:39.556937] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.766 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.766 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:15.766 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.766 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.766 Malloc0 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:16.023 [2024-11-18 00:14:39.605793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.023 00:14:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=140743 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 140743 /var/tmp/bdevperf.sock 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 140743 ']' 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:16.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.023 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:16.023 [2024-11-18 00:14:39.650719] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:09:16.023 [2024-11-18 00:14:39.650782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140743 ] 00:09:16.023 [2024-11-18 00:14:39.715445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.023 [2024-11-18 00:14:39.760006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.281 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.281 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:16.281 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:16.281 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.281 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:16.281 NVMe0n1 00:09:16.281 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.281 00:14:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:16.538 Running I/O for 10 seconds... 
00:09:18.425 8288.00 IOPS, 32.38 MiB/s [2024-11-17T23:14:43.183Z] 8699.50 IOPS, 33.98 MiB/s [2024-11-17T23:14:44.556Z] 8757.00 IOPS, 34.21 MiB/s [2024-11-17T23:14:45.491Z] 8701.75 IOPS, 33.99 MiB/s [2024-11-17T23:14:46.427Z] 8796.80 IOPS, 34.36 MiB/s [2024-11-17T23:14:47.362Z] 8747.17 IOPS, 34.17 MiB/s [2024-11-17T23:14:48.296Z] 8774.29 IOPS, 34.27 MiB/s [2024-11-17T23:14:49.228Z] 8818.38 IOPS, 34.45 MiB/s [2024-11-17T23:14:50.169Z] 8838.11 IOPS, 34.52 MiB/s [2024-11-17T23:14:50.428Z] 8832.10 IOPS, 34.50 MiB/s 00:09:26.606 Latency(us) 00:09:26.606 [2024-11-17T23:14:50.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.606 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:26.606 Verification LBA range: start 0x0 length 0x4000 00:09:26.606 NVMe0n1 : 10.07 8867.59 34.64 0.00 0.00 114936.61 13786.83 69128.34 00:09:26.606 [2024-11-17T23:14:50.428Z] =================================================================================================================== 00:09:26.606 [2024-11-17T23:14:50.428Z] Total : 8867.59 34.64 0.00 0.00 114936.61 13786.83 69128.34 00:09:26.606 { 00:09:26.606 "results": [ 00:09:26.606 { 00:09:26.606 "job": "NVMe0n1", 00:09:26.606 "core_mask": "0x1", 00:09:26.606 "workload": "verify", 00:09:26.606 "status": "finished", 00:09:26.606 "verify_range": { 00:09:26.606 "start": 0, 00:09:26.606 "length": 16384 00:09:26.606 }, 00:09:26.606 "queue_depth": 1024, 00:09:26.606 "io_size": 4096, 00:09:26.606 "runtime": 10.069595, 00:09:26.606 "iops": 8867.58603498949, 00:09:26.606 "mibps": 34.639007949177696, 00:09:26.606 "io_failed": 0, 00:09:26.606 "io_timeout": 0, 00:09:26.606 "avg_latency_us": 114936.6140540899, 00:09:26.606 "min_latency_us": 13786.832592592593, 00:09:26.606 "max_latency_us": 69128.34370370371 00:09:26.606 } 00:09:26.606 ], 00:09:26.606 "core_count": 1 00:09:26.606 } 00:09:26.606 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 140743 00:09:26.606 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 140743 ']' 00:09:26.606 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 140743 00:09:26.606 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:26.606 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.606 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 140743 00:09:26.606 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.606 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.606 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 140743' 00:09:26.606 killing process with pid 140743 00:09:26.606 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 140743 00:09:26.606 Received shutdown signal, test time was about 10.000000 seconds 00:09:26.606 00:09:26.606 Latency(us) 00:09:26.606 [2024-11-17T23:14:50.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.606 [2024-11-17T23:14:50.428Z] =================================================================================================================== 00:09:26.606 [2024-11-17T23:14:50.428Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:26.606 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 140743 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:26.865 rmmod nvme_tcp 00:09:26.865 rmmod nvme_fabrics 00:09:26.865 rmmod nvme_keyring 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 140714 ']' 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 140714 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 140714 ']' 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 140714 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 140714 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 140714' 00:09:26.865 killing process with pid 140714 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 140714 00:09:26.865 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 140714 00:09:27.126 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:27.126 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:27.126 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:27.126 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:27.126 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:27.126 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:27.126 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:27.126 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:27.126 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:27.126 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.126 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.126 00:14:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.039 00:14:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:29.039 00:09:29.039 real 0m16.093s 00:09:29.039 user 0m22.303s 00:09:29.039 sys 0m3.250s 00:09:29.039 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.039 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.039 ************************************ 00:09:29.039 END TEST nvmf_queue_depth 00:09:29.039 ************************************ 00:09:29.039 00:14:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:29.039 00:14:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.039 00:14:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.040 00:14:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.299 ************************************ 00:09:29.299 START TEST nvmf_target_multipath 00:09:29.299 ************************************ 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:29.299 * Looking for test storage... 
00:09:29.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:29.299 00:14:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:29.299 00:14:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.299 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:29.299 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:29.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.300 --rc genhtml_branch_coverage=1 00:09:29.300 --rc genhtml_function_coverage=1 00:09:29.300 --rc genhtml_legend=1 00:09:29.300 --rc geninfo_all_blocks=1 00:09:29.300 --rc geninfo_unexecuted_blocks=1 00:09:29.300 00:09:29.300 ' 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:29.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.300 --rc genhtml_branch_coverage=1 00:09:29.300 --rc genhtml_function_coverage=1 00:09:29.300 --rc genhtml_legend=1 00:09:29.300 --rc geninfo_all_blocks=1 00:09:29.300 --rc geninfo_unexecuted_blocks=1 00:09:29.300 00:09:29.300 ' 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:29.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.300 --rc genhtml_branch_coverage=1 00:09:29.300 --rc genhtml_function_coverage=1 00:09:29.300 --rc genhtml_legend=1 00:09:29.300 --rc geninfo_all_blocks=1 00:09:29.300 --rc geninfo_unexecuted_blocks=1 00:09:29.300 00:09:29.300 ' 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:29.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.300 --rc genhtml_branch_coverage=1 00:09:29.300 --rc genhtml_function_coverage=1 00:09:29.300 --rc genhtml_legend=1 00:09:29.300 --rc geninfo_all_blocks=1 00:09:29.300 --rc geninfo_unexecuted_blocks=1 00:09:29.300 00:09:29.300 ' 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:29.300 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:29.301 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.301 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.301 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.301 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:29.301 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:29.301 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:29.301 00:14:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:31.850 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:31.850 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:31.850 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.850 00:14:55 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:31.850 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:31.850 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:31.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:09:31.851 00:09:31.851 --- 10.0.0.2 ping statistics --- 00:09:31.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.851 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:31.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:09:31.851 00:09:31.851 --- 10.0.0.1 ping statistics --- 00:09:31.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.851 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:31.851 only one NIC for nvmf test 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:31.851 00:14:55 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:31.851 rmmod nvme_tcp 00:09:31.851 rmmod nvme_fabrics 00:09:31.851 rmmod nvme_keyring 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.851 00:14:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:33.859 00:09:33.859 real 0m4.596s 00:09:33.859 user 0m0.928s 00:09:33.859 sys 0m1.667s 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:33.859 ************************************ 00:09:33.859 END TEST nvmf_target_multipath 00:09:33.859 ************************************ 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:33.859 ************************************ 00:09:33.859 START TEST nvmf_zcopy 00:09:33.859 ************************************ 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:33.859 * Looking for test storage... 00:09:33.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.859 00:14:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.859 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:33.860 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.860 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:33.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.860 --rc genhtml_branch_coverage=1 00:09:33.860 --rc genhtml_function_coverage=1 00:09:33.860 --rc genhtml_legend=1 00:09:33.860 --rc geninfo_all_blocks=1 00:09:33.860 --rc geninfo_unexecuted_blocks=1 00:09:33.860 00:09:33.860 ' 00:09:33.860 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:33.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.860 --rc genhtml_branch_coverage=1 00:09:33.860 --rc genhtml_function_coverage=1 00:09:33.860 --rc genhtml_legend=1 00:09:33.860 --rc geninfo_all_blocks=1 00:09:33.860 --rc geninfo_unexecuted_blocks=1 00:09:33.860 00:09:33.860 ' 00:09:33.860 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:33.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.860 --rc genhtml_branch_coverage=1 00:09:33.860 --rc genhtml_function_coverage=1 00:09:33.860 --rc genhtml_legend=1 00:09:33.860 --rc geninfo_all_blocks=1 00:09:33.860 --rc geninfo_unexecuted_blocks=1 00:09:33.860 00:09:33.860 ' 00:09:33.860 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:33.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.860 --rc genhtml_branch_coverage=1 00:09:33.860 --rc 
genhtml_function_coverage=1 00:09:33.860 --rc genhtml_legend=1 00:09:33.860 --rc geninfo_all_blocks=1 00:09:33.860 --rc geninfo_unexecuted_blocks=1 00:09:33.860 00:09:33.860 ' 00:09:33.860 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.860 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:33.860 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.860 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.860 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.860 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.860 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.860 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.860 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.860 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.860 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.860 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.130 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:34.130 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:34.130 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.130 00:14:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.130 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.130 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:34.131 00:14:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:34.131 00:14:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:36.185 00:14:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:36.185 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:36.186 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:36.186 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:36.186 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:36.186 00:14:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:36.186 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.186 00:14:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:36.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:36.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:09:36.186 00:09:36.186 --- 10.0.0.2 ping statistics --- 00:09:36.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.186 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:36.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:36.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:09:36.186 00:09:36.186 --- 10.0.0.1 ping statistics --- 00:09:36.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.186 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:36.186 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.187 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=145962 00:09:36.187 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 145962 00:09:36.187 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # 
'[' -z 145962 ']' 00:09:36.187 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.187 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.187 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:36.187 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.187 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.187 00:14:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.465 [2024-11-18 00:15:00.017293] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:36.465 [2024-11-18 00:15:00.017421] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.465 [2024-11-18 00:15:00.096662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.465 [2024-11-18 00:15:00.148050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.465 [2024-11-18 00:15:00.148107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:36.465 [2024-11-18 00:15:00.148132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.465 [2024-11-18 00:15:00.148157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.465 [2024-11-18 00:15:00.148175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:36.465 [2024-11-18 00:15:00.148826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.465 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.465 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:36.465 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:36.465 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:36.465 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.738 [2024-11-18 00:15:00.293214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.738 [2024-11-18 00:15:00.309439] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.738 malloc0 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.738 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:36.739 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:36.739 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:36.739 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:36.739 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:36.739 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:36.739 { 00:09:36.739 "params": { 00:09:36.739 "name": "Nvme$subsystem", 00:09:36.739 "trtype": "$TEST_TRANSPORT", 00:09:36.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:36.739 "adrfam": "ipv4", 00:09:36.739 "trsvcid": "$NVMF_PORT", 00:09:36.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:36.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:36.739 "hdgst": ${hdgst:-false}, 00:09:36.739 "ddgst": ${ddgst:-false} 00:09:36.739 }, 00:09:36.739 "method": "bdev_nvme_attach_controller" 00:09:36.739 } 00:09:36.739 EOF 00:09:36.739 )") 00:09:36.739 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:36.739 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:36.739 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:36.739 00:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:36.739 "params": { 00:09:36.739 "name": "Nvme1", 00:09:36.739 "trtype": "tcp", 00:09:36.739 "traddr": "10.0.0.2", 00:09:36.739 "adrfam": "ipv4", 00:09:36.739 "trsvcid": "4420", 00:09:36.739 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:36.739 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:36.739 "hdgst": false, 00:09:36.739 "ddgst": false 00:09:36.739 }, 00:09:36.739 "method": "bdev_nvme_attach_controller" 00:09:36.739 }' 00:09:36.739 [2024-11-18 00:15:00.389772] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:36.739 [2024-11-18 00:15:00.389850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146027 ] 00:09:36.739 [2024-11-18 00:15:00.459440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.739 [2024-11-18 00:15:00.510393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.015 Running I/O for 10 seconds... 
00:09:39.083  5550.00 IOPS,  43.36 MiB/s
[2024-11-17T23:15:03.931Z]  5623.00 IOPS,  43.93 MiB/s
[2024-11-17T23:15:04.897Z]  5673.33 IOPS,  44.32 MiB/s
[2024-11-17T23:15:05.883Z]  5697.00 IOPS,  44.51 MiB/s
[2024-11-17T23:15:06.912Z]  5712.60 IOPS,  44.63 MiB/s
[2024-11-17T23:15:07.971Z]  5721.83 IOPS,  44.70 MiB/s
[2024-11-17T23:15:08.911Z]  5728.00 IOPS,  44.75 MiB/s
[2024-11-17T23:15:09.852Z]  5729.62 IOPS,  44.76 MiB/s
[2024-11-17T23:15:11.237Z]  5737.67 IOPS,  44.83 MiB/s
[2024-11-17T23:15:11.238Z]  5737.90 IOPS,  44.83 MiB/s
00:09:47.416 Latency(us)
00:09:47.416 [2024-11-17T23:15:11.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:47.416 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:47.416 Verification LBA range: start 0x0 length 0x1000
00:09:47.416 Nvme1n1 : 10.01 5742.16 44.86 0.00 0.00 22233.87 3835.07 32039.82
00:09:47.416 [2024-11-17T23:15:11.238Z] ===================================================================================================================
00:09:47.416 [2024-11-17T23:15:11.238Z] Total : 5742.16 44.86 0.00 0.00 22233.87 3835.07 32039.82
00:09:47.416 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=147962
00:09:47.416 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:47.416 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:47.416 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:47.416 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:47.416 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:09:47.416 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:09:47.416 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:47.416 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:47.416 {
00:09:47.416 "params": {
00:09:47.416 "name": "Nvme$subsystem",
00:09:47.416 "trtype": "$TEST_TRANSPORT",
00:09:47.416 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:47.416 "adrfam": "ipv4",
00:09:47.416 "trsvcid": "$NVMF_PORT",
00:09:47.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:47.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:47.416 "hdgst": ${hdgst:-false},
00:09:47.416 "ddgst": ${ddgst:-false}
00:09:47.416 },
00:09:47.416 "method": "bdev_nvme_attach_controller"
00:09:47.416 }
00:09:47.416 EOF
00:09:47.416 )")
00:09:47.416 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:09:47.416 [2024-11-18 00:15:11.056377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.416 [2024-11-18 00:15:11.056422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:47.416 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
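The xtrace lines above show the pattern `gen_nvmf_target_json` follows: expand a heredoc template once per subsystem argument (defaulting to `1` via `"${@:-1}"`), collect the JSON objects in an array, comma-join them with `IFS=,`, and feed the result to bdevperf on `/dev/fd/63` via process substitution. A simplified, self-contained sketch of that pattern — reconstructed from the trace, not copied from the real `nvmf/common.sh`, and without the `jq` pretty-printing step:

```shell
#!/usr/bin/env bash
# Sketch of the template-expansion pattern seen in the xtrace above.
# The variables below stand in for the test environment's values.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_nvmf_target_json() {
    local subsystem
    local config=()
    # With no arguments, "${@:-1}" expands to "1", so one subsystem is emitted.
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Comma-join the per-subsystem objects, as the IFS=, trace line does.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 1
```

The test then passes the generated JSON to bdevperf without a temp file, roughly `bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192`, which is why the log shows `--json /dev/fd/63`.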
00:09:47.416 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:47.416 00:15:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:47.416 "params": { 00:09:47.416 "name": "Nvme1", 00:09:47.416 "trtype": "tcp", 00:09:47.416 "traddr": "10.0.0.2", 00:09:47.416 "adrfam": "ipv4", 00:09:47.416 "trsvcid": "4420", 00:09:47.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:47.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:47.416 "hdgst": false, 00:09:47.416 "ddgst": false 00:09:47.416 }, 00:09:47.416 "method": "bdev_nvme_attach_controller" 00:09:47.416 }' 00:09:47.416 [2024-11-18 00:15:11.064283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.064332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.072329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.072361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.080349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.080371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.088371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.088393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.093091] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:09:47.416 [2024-11-18 00:15:11.093151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147962 ] 00:09:47.416 [2024-11-18 00:15:11.096393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.096416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.104422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.104460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.112425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.112446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.120447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.120475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.128475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.128497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.136499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.136520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.144515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.144537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:47.416 [2024-11-18 00:15:11.152537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.152559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.160561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.160584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.161572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.416 [2024-11-18 00:15:11.168629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.168660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.176671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.176710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.184628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.184650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.192659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.192680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.200691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.200711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.208699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.208720] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.208726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.416 [2024-11-18 00:15:11.216733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.216752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.224770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.224802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.416 [2024-11-18 00:15:11.232827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.416 [2024-11-18 00:15:11.232866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.240847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.240888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.248858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.248898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.256875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.256913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.264895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.264935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.272873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:09:47.678 [2024-11-18 00:15:11.272895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.280934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.280974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.288955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.288994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.296940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.296962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.304959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.304980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.312985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.313009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.321004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.321027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.329024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.329046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.337047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 
00:15:11.337070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.345067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.345089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.353088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.353108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.361126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.361147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.369131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.369151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.377155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.377177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.385188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.385211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.393201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.393224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.401222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.401244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.409241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.409262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.451543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.451571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.457395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.457419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 Running I/O for 5 seconds... 00:09:47.678 [2024-11-18 00:15:11.465420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.465444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.480050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.480079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.678 [2024-11-18 00:15:11.490705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.678 [2024-11-18 00:15:11.490740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.502456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.502486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.514077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.514104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.525214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.525242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.536426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.536456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.547344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.547372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.558216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.558243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.568951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.568993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.580080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.580109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.593228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.593255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.603715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.603757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 
00:15:11.614398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.614425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.625364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.625392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.636356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.636384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.649239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.649266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.659805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.659833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.670735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.670764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.683168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.683195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.693459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.693487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.704051] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.704089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.714995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.715021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.725552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.939 [2024-11-18 00:15:11.725579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.939 [2024-11-18 00:15:11.736228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.940 [2024-11-18 00:15:11.736254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.940 [2024-11-18 00:15:11.747184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.940 [2024-11-18 00:15:11.747211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.940 [2024-11-18 00:15:11.758085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.940 [2024-11-18 00:15:11.758112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.202 [2024-11-18 00:15:11.769145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.202 [2024-11-18 00:15:11.769172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.202 [2024-11-18 00:15:11.779393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.202 [2024-11-18 00:15:11.779420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.202 [2024-11-18 00:15:11.790500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:48.202 [2024-11-18 00:15:11.790528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.202 [2024-11-18 00:15:11.803463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.202 [2024-11-18 00:15:11.803491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.202 [2024-11-18 00:15:11.814038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.202 [2024-11-18 00:15:11.814066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.202 [2024-11-18 00:15:11.824766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.202 [2024-11-18 00:15:11.824793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.202 [2024-11-18 00:15:11.837761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.202 [2024-11-18 00:15:11.837793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.202 [2024-11-18 00:15:11.847896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.202 [2024-11-18 00:15:11.847923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.202 [2024-11-18 00:15:11.858490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.202 [2024-11-18 00:15:11.858518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.202 [2024-11-18 00:15:11.869130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.202 [2024-11-18 00:15:11.869159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.203 [2024-11-18 00:15:11.879984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.203 
[2024-11-18 00:15:11.880011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.203 [2024-11-18 00:15:11.892654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.203 [2024-11-18 00:15:11.892681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.203 [2024-11-18 00:15:11.902915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.203 [2024-11-18 00:15:11.902942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.203 [2024-11-18 00:15:11.913863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.203 [2024-11-18 00:15:11.913897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.203 [2024-11-18 00:15:11.924769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.203 [2024-11-18 00:15:11.924797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.203 [2024-11-18 00:15:11.935570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.203 [2024-11-18 00:15:11.935598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.203 [2024-11-18 00:15:11.946028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.203 [2024-11-18 00:15:11.946055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.203 [2024-11-18 00:15:11.956653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.203 [2024-11-18 00:15:11.956681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.203 [2024-11-18 00:15:11.967583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.203 [2024-11-18 00:15:11.967627] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.203 [2024-11-18 00:15:11.980670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.203 [2024-11-18 00:15:11.980699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.203 [2024-11-18 00:15:11.990641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.203 [2024-11-18 00:15:11.990683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.203 [2024-11-18 00:15:12.001542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.203 [2024-11-18 00:15:12.001570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.203 [2024-11-18 00:15:12.014070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.203 [2024-11-18 00:15:12.014097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.203 [2024-11-18 00:15:12.024244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.203 [2024-11-18 00:15:12.024270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.465 [2024-11-18 00:15:12.034889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.465 [2024-11-18 00:15:12.034917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.465 [2024-11-18 00:15:12.045943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.465 [2024-11-18 00:15:12.045970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.465 [2024-11-18 00:15:12.056965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.465 [2024-11-18 00:15:12.056992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:48.465 [2024-11-18 00:15:12.069509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.465 [2024-11-18 00:15:12.069536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.465 [2024-11-18 00:15:12.078905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.465 [2024-11-18 00:15:12.078933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.465 [2024-11-18 00:15:12.089556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.465 [2024-11-18 00:15:12.089584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.465 [2024-11-18 00:15:12.100378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.465 [2024-11-18 00:15:12.100407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.465 [2024-11-18 00:15:12.111384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.465 [2024-11-18 00:15:12.111411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.465 [2024-11-18 00:15:12.122777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.465 [2024-11-18 00:15:12.122812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.465 [2024-11-18 00:15:12.133617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.465 [2024-11-18 00:15:12.133644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.465 [2024-11-18 00:15:12.144430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.465 [2024-11-18 00:15:12.144457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.465 [2024-11-18 00:15:12.157367] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:48.465 [2024-11-18 00:15:12.157395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair — subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats roughly every 10-13 ms from 00:15:12.167 through 00:15:12.466 ...]
00:09:48.728 11614.00 IOPS, 90.73 MiB/s [2024-11-17T23:15:12.550Z]
[... the same error pair continues repeating from 00:15:12.477 through 00:15:13.469 ...]
00:09:49.784 11655.50 IOPS, 91.06 MiB/s [2024-11-17T23:15:13.606Z]
[... the same error pair continues repeating from 00:15:13.482 through 00:15:14.050 ...]
00:09:50.306 [2024-11-18 00:15:14.059949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:50.306 [2024-11-18 00:15:14.059976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:09:50.306 [2024-11-18 00:15:14.071300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.306 [2024-11-18 00:15:14.071336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.306 [2024-11-18 00:15:14.084175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.306 [2024-11-18 00:15:14.084202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.306 [2024-11-18 00:15:14.094551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.306 [2024-11-18 00:15:14.094578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.306 [2024-11-18 00:15:14.105446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.306 [2024-11-18 00:15:14.105475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.306 [2024-11-18 00:15:14.116328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.306 [2024-11-18 00:15:14.116356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.306 [2024-11-18 00:15:14.127454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.306 [2024-11-18 00:15:14.127484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.140135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.140164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.150729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.150756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.161697] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.161725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.172798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.172825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.183673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.183705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.196585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.196613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.207127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.207154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.218105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.218131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.228555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.228584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.239662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.239689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.253445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.253474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.263742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.263769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.274274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.274301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.285177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.285204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.295523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.295552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.306198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.306240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.316894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.316920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.328143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.328170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.340718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 
[2024-11-18 00:15:14.340744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.350579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.350620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.361527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.361555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.374577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.374604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.567 [2024-11-18 00:15:14.385196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.567 [2024-11-18 00:15:14.385223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.396047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.396075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.406560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.406587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.417534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.417561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.430639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.430674] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.440931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.440958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.451576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.451603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.462275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.462324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.472635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.472662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 11674.33 IOPS, 91.21 MiB/s [2024-11-17T23:15:14.650Z] [2024-11-18 00:15:14.482920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.482946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.493429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.493457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.504254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.504280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.515449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.515477] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.528128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.528156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.538737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.538765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.549146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.549173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.559972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.559999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.571099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.571126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.583827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.583855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.594521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.594549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.605400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.605428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:50.828 [2024-11-18 00:15:14.618002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.618029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.627736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.627763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.638625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.638660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.828 [2024-11-18 00:15:14.649714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.828 [2024-11-18 00:15:14.649742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.085 [2024-11-18 00:15:14.660968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.085 [2024-11-18 00:15:14.660996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.085 [2024-11-18 00:15:14.673638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.085 [2024-11-18 00:15:14.673665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.085 [2024-11-18 00:15:14.685082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.085 [2024-11-18 00:15:14.685109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.085 [2024-11-18 00:15:14.694360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.085 [2024-11-18 00:15:14.694388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.085 [2024-11-18 00:15:14.706497] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.085 [2024-11-18 00:15:14.706525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.085 [2024-11-18 00:15:14.718015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.085 [2024-11-18 00:15:14.718042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.085 [2024-11-18 00:15:14.731915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.085 [2024-11-18 00:15:14.731942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.085 [2024-11-18 00:15:14.742787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.085 [2024-11-18 00:15:14.742814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.085 [2024-11-18 00:15:14.753742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.085 [2024-11-18 00:15:14.753768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.085 [2024-11-18 00:15:14.766904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.085 [2024-11-18 00:15:14.766932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.085 [2024-11-18 00:15:14.776956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.085 [2024-11-18 00:15:14.776983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.085 [2024-11-18 00:15:14.788338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.086 [2024-11-18 00:15:14.788366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.086 [2024-11-18 00:15:14.801123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:51.086 [2024-11-18 00:15:14.801150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.086 [2024-11-18 00:15:14.811561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.086 [2024-11-18 00:15:14.811588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.086 [2024-11-18 00:15:14.822144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.086 [2024-11-18 00:15:14.822171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.086 [2024-11-18 00:15:14.833211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.086 [2024-11-18 00:15:14.833238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.086 [2024-11-18 00:15:14.844874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.086 [2024-11-18 00:15:14.844901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.086 [2024-11-18 00:15:14.856001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.086 [2024-11-18 00:15:14.856039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.086 [2024-11-18 00:15:14.868356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.086 [2024-11-18 00:15:14.868385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.086 [2024-11-18 00:15:14.877879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.086 [2024-11-18 00:15:14.877906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.086 [2024-11-18 00:15:14.889274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.086 
[2024-11-18 00:15:14.889302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.086 [2024-11-18 00:15:14.900019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.086 [2024-11-18 00:15:14.900047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:14.911328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:14.911357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:14.924270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:14.924320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:14.934765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:14.934792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:14.945876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:14.945917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:14.958767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:14.958794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:14.969088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:14.969114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:14.979949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:14.979976] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:14.991032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:14.991059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:15.001786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:15.001813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:15.014588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:15.014630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:15.024961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:15.024988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:15.035590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:15.035619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:15.046175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:15.046201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:15.057015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:15.057043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:15.069744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:15.069771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:51.346 [2024-11-18 00:15:15.081743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:15.081769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:15.091773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:15.091800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:15.102157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:15.102184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:15.112785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:15.112811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:15.123516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:15.123544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:15.134004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:15.134031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:15.144917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:15.144943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:15.157195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:15.157221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.346 [2024-11-18 00:15:15.167343] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.346 [2024-11-18 00:15:15.167372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.178105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.607 [2024-11-18 00:15:15.178132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.188743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.607 [2024-11-18 00:15:15.188770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.199025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.607 [2024-11-18 00:15:15.199051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.209818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.607 [2024-11-18 00:15:15.209844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.222580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.607 [2024-11-18 00:15:15.222607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.232571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.607 [2024-11-18 00:15:15.232599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.243335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.607 [2024-11-18 00:15:15.243376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.254577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:51.607 [2024-11-18 00:15:15.254605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.265795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.607 [2024-11-18 00:15:15.265824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.278942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.607 [2024-11-18 00:15:15.278970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.289573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.607 [2024-11-18 00:15:15.289617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.300701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.607 [2024-11-18 00:15:15.300728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.313640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.607 [2024-11-18 00:15:15.313668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.323748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.607 [2024-11-18 00:15:15.323785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.335603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.607 [2024-11-18 00:15:15.335631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.346738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.607 
[2024-11-18 00:15:15.346765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.358373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.607 [2024-11-18 00:15:15.358401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.369724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.607 [2024-11-18 00:15:15.369752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.607 [2024-11-18 00:15:15.380943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.608 [2024-11-18 00:15:15.380970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.608 [2024-11-18 00:15:15.392476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.608 [2024-11-18 00:15:15.392504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.608 [2024-11-18 00:15:15.404281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.608 [2024-11-18 00:15:15.404334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.608 [2024-11-18 00:15:15.417843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.608 [2024-11-18 00:15:15.417870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.608 [2024-11-18 00:15:15.428685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.608 [2024-11-18 00:15:15.428713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.440064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.440092] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.451287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.451338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.462828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.462855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.474166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.474193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 11658.75 IOPS, 91.08 MiB/s [2024-11-17T23:15:15.690Z] [2024-11-18 00:15:15.485439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.485475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.496470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.496498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.507859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.507886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.518459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.518487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.529330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.529358] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.540757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.540784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.551918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.551961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.563181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.563209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.574140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.574166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.587069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.587096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.596456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.596483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.607947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.607973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.619196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.619237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:51.868 [2024-11-18 00:15:15.630227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.630253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.643376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.643404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.654163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.654189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.665420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.665448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.678072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.678099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.868 [2024-11-18 00:15:15.688677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.868 [2024-11-18 00:15:15.688704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.699797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.699832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.712453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.712482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.723061] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.723088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.734458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.734486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.747499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.747528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.758040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.758066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.769447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.769474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.781028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.781054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.792716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.792742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.803681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.803707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.816773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.816800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.827014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.827041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.837731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.837758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.850449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.850476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.862026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.862053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.871460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.871488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.883320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.883347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.894173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.894199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.905038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 
[2024-11-18 00:15:15.905064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.915950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.915984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.926841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.926868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.938012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.938039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.128 [2024-11-18 00:15:15.949262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.128 [2024-11-18 00:15:15.949305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:15.960137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:15.960165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:15.970754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:15.970780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:15.981619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:15.981661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:15.992513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:15.992541] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:16.003459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.003486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:16.014348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.014376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:16.025329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.025356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:16.036610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.036652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:16.047001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.047028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:16.057775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.057802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:16.070440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.070468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:16.080125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.080151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:52.387 [2024-11-18 00:15:16.091646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.091672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:16.102478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.102507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:16.113456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.113484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:16.126423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.126460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:16.136453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.136481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:16.146771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.146798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:16.157545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.157573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:16.170448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.170476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:16.180966] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.180993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:16.191536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.191564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.387 [2024-11-18 00:15:16.202397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.387 [2024-11-18 00:15:16.202425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.213563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.213592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.226662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.226689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.237261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.237289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.247802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.247829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.258706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.258733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.269505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.269532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.282186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.282213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.293046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.293072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.303999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.304026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.317876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.317903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.328399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.328427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.339331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.339367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.350197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.350224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.361368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 
[2024-11-18 00:15:16.361396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.372509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.372537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.383466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.383494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.394258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.394285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.404996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.405024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.422592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.422622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.432479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.432506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.443663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.443705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.455241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.455268] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.646 [2024-11-18 00:15:16.466437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.646 [2024-11-18 00:15:16.466466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 [2024-11-18 00:15:16.477741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.477768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 11646.40 IOPS, 90.99 MiB/s [2024-11-17T23:15:16.730Z] [2024-11-18 00:15:16.488039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.488066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 00:09:52.908 Latency(us) 00:09:52.908 [2024-11-17T23:15:16.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.908 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:52.908 Nvme1n1 : 5.01 11646.80 90.99 0.00 0.00 10975.04 4563.25 21359.88 00:09:52.908 [2024-11-17T23:15:16.730Z] =================================================================================================================== 00:09:52.908 [2024-11-17T23:15:16.730Z] Total : 11646.80 90.99 0.00 0.00 10975.04 4563.25 21359.88 00:09:52.908 [2024-11-18 00:15:16.492259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.492282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 [2024-11-18 00:15:16.500413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.500440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 [2024-11-18 00:15:16.508425] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.508462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 [2024-11-18 00:15:16.516486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.516534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 [2024-11-18 00:15:16.524494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.524543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 [2024-11-18 00:15:16.532517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.532564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 [2024-11-18 00:15:16.540530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.540576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 [2024-11-18 00:15:16.548564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.548625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 [2024-11-18 00:15:16.556584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.556642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 [2024-11-18 00:15:16.564598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.564656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 [2024-11-18 00:15:16.572626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.572675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 [2024-11-18 00:15:16.580663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.580712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 [2024-11-18 00:15:16.588682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.588731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 [2024-11-18 00:15:16.596699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.596748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 [2024-11-18 00:15:16.604720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.604764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 [2024-11-18 00:15:16.612743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.612792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 [2024-11-18 00:15:16.620759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.620805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.908 [2024-11-18 00:15:16.628750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.908 [2024-11-18 00:15:16.628776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.909 [2024-11-18 00:15:16.636761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.909 
[2024-11-18 00:15:16.636786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.909 [2024-11-18 00:15:16.644835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.909 [2024-11-18 00:15:16.644885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.909 [2024-11-18 00:15:16.652845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.909 [2024-11-18 00:15:16.652889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.909 [2024-11-18 00:15:16.660818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.909 [2024-11-18 00:15:16.660841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.909 [2024-11-18 00:15:16.668829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.909 [2024-11-18 00:15:16.668848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.909 [2024-11-18 00:15:16.676848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.909 [2024-11-18 00:15:16.676867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (147962) - No such process 00:09:52.909 00:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 147962 00:09:52.909 00:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.909 00:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.909 00:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.909 00:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:52.909 00:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:52.909 00:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.909 00:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.909 delay0 00:09:52.909 00:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.909 00:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:52.909 00:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.909 00:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.909 00:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.909 00:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:53.169 [2024-11-18 00:15:16.837478] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:01.322 Initializing NVMe Controllers 00:10:01.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:01.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:01.322 Initialization complete. Launching workers. 
00:10:01.322 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 237, failed: 21999 00:10:01.322 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22108, failed to submit 128 00:10:01.322 success 22042, unsuccessful 66, failed 0 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:01.322 rmmod nvme_tcp 00:10:01.322 rmmod nvme_fabrics 00:10:01.322 rmmod nvme_keyring 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 145962 ']' 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 145962 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 145962 ']' 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 145962 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 145962 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 145962' 00:10:01.322 killing process with pid 145962 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 145962 00:10:01.322 00:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 145962 00:10:01.322 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:01.322 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:01.322 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:01.322 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:01.322 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:01.322 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:01.322 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:01.322 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:01.322 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:01.322 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:01.322 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.322 00:15:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:02.705 00:10:02.705 real 0m28.730s 00:10:02.705 user 0m42.195s 00:10:02.705 sys 0m8.502s 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:02.705 ************************************ 00:10:02.705 END TEST nvmf_zcopy 00:10:02.705 ************************************ 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:02.705 ************************************ 00:10:02.705 START TEST nvmf_nmic 00:10:02.705 ************************************ 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:02.705 * Looking for test storage... 
00:10:02.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.705 00:15:26 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:02.705 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:02.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.706 --rc genhtml_branch_coverage=1 00:10:02.706 --rc genhtml_function_coverage=1 00:10:02.706 --rc genhtml_legend=1 00:10:02.706 --rc geninfo_all_blocks=1 00:10:02.706 --rc geninfo_unexecuted_blocks=1 
00:10:02.706 00:10:02.706 ' 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:02.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.706 --rc genhtml_branch_coverage=1 00:10:02.706 --rc genhtml_function_coverage=1 00:10:02.706 --rc genhtml_legend=1 00:10:02.706 --rc geninfo_all_blocks=1 00:10:02.706 --rc geninfo_unexecuted_blocks=1 00:10:02.706 00:10:02.706 ' 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:02.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.706 --rc genhtml_branch_coverage=1 00:10:02.706 --rc genhtml_function_coverage=1 00:10:02.706 --rc genhtml_legend=1 00:10:02.706 --rc geninfo_all_blocks=1 00:10:02.706 --rc geninfo_unexecuted_blocks=1 00:10:02.706 00:10:02.706 ' 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:02.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.706 --rc genhtml_branch_coverage=1 00:10:02.706 --rc genhtml_function_coverage=1 00:10:02.706 --rc genhtml_legend=1 00:10:02.706 --rc geninfo_all_blocks=1 00:10:02.706 --rc geninfo_unexecuted_blocks=1 00:10:02.706 00:10:02.706 ' 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.706 00:15:26 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:02.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:02.706 
00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:02.706 00:15:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:05.247 00:15:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:05.247 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:05.247 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:05.247 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:05.247 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:05.247 
00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:05.247 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:05.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:05.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:10:05.248 00:10:05.248 --- 10.0.0.2 ping statistics --- 00:10:05.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.248 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:05.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:05.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:10:05.248 00:10:05.248 --- 10.0.0.1 ping statistics --- 00:10:05.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.248 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=151413 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:05.248 
00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 151413 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 151413 ']' 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.248 00:15:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.248 [2024-11-18 00:15:28.981828] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:10:05.248 [2024-11-18 00:15:28.981904] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.248 [2024-11-18 00:15:29.057849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:05.510 [2024-11-18 00:15:29.107350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.510 [2024-11-18 00:15:29.107406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.510 [2024-11-18 00:15:29.107428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.510 [2024-11-18 00:15:29.107446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:05.510 [2024-11-18 00:15:29.107461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:05.510 [2024-11-18 00:15:29.109142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.510 [2024-11-18 00:15:29.109166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.510 [2024-11-18 00:15:29.109224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:05.510 [2024-11-18 00:15:29.109228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.510 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.510 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:05.510 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:05.510 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:05.510 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.510 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.510 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:05.510 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.511 [2024-11-18 00:15:29.252590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:05.511 00:15:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.511 Malloc0 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.511 [2024-11-18 00:15:29.312021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:05.511 test case1: single bdev can't be used in multiple subsystems 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:05.511 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.773 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.773 [2024-11-18 00:15:29.335880] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:05.773 [2024-11-18 00:15:29.335912] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:05.773 [2024-11-18 00:15:29.335935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:10:05.773 request: 00:10:05.773 { 00:10:05.773 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:05.773 "namespace": { 00:10:05.773 "bdev_name": "Malloc0", 00:10:05.773 "no_auto_visible": false 00:10:05.773 }, 00:10:05.773 "method": "nvmf_subsystem_add_ns", 00:10:05.773 "req_id": 1 00:10:05.773 } 00:10:05.773 Got JSON-RPC error response 00:10:05.773 response: 00:10:05.773 { 00:10:05.773 "code": -32602, 00:10:05.773 "message": "Invalid parameters" 00:10:05.773 } 00:10:05.773 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:05.773 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:05.773 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:05.773 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:05.773 Adding namespace failed - expected result. 00:10:05.773 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:05.773 test case2: host connect to nvmf target in multiple paths 00:10:05.773 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:05.773 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.773 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.773 [2024-11-18 00:15:29.343982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:05.773 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.773 00:15:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:06.344 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:06.915 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:06.915 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:06.915 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:06.915 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:06.915 00:15:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:09.478 00:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:09.478 00:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:09.478 00:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:09.478 00:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:09.478 00:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:09.478 00:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:09.478 00:15:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:09.478 [global] 00:10:09.478 thread=1 
00:10:09.478 invalidate=1 00:10:09.478 rw=write 00:10:09.478 time_based=1 00:10:09.478 runtime=1 00:10:09.478 ioengine=libaio 00:10:09.478 direct=1 00:10:09.478 bs=4096 00:10:09.478 iodepth=1 00:10:09.478 norandommap=0 00:10:09.478 numjobs=1 00:10:09.478 00:10:09.478 verify_dump=1 00:10:09.478 verify_backlog=512 00:10:09.478 verify_state_save=0 00:10:09.478 do_verify=1 00:10:09.478 verify=crc32c-intel 00:10:09.478 [job0] 00:10:09.478 filename=/dev/nvme0n1 00:10:09.478 Could not set queue depth (nvme0n1) 00:10:09.478 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.478 fio-3.35 00:10:09.478 Starting 1 thread 00:10:10.863 00:10:10.863 job0: (groupid=0, jobs=1): err= 0: pid=152014: Mon Nov 18 00:15:34 2024 00:10:10.863 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:10.863 slat (nsec): min=5546, max=58171, avg=13199.87, stdev=5669.48 00:10:10.863 clat (usec): min=175, max=320, avg=231.78, stdev=17.86 00:10:10.863 lat (usec): min=181, max=329, avg=244.98, stdev=21.52 00:10:10.863 clat percentiles (usec): 00:10:10.863 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 217], 00:10:10.863 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 237], 00:10:10.863 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 262], 00:10:10.863 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 306], 00:10:10.863 | 99.99th=[ 322] 00:10:10.863 write: IOPS=2468, BW=9874KiB/s (10.1MB/s)(9884KiB/1001msec); 0 zone resets 00:10:10.863 slat (usec): min=7, max=28348, avg=27.86, stdev=570.01 00:10:10.863 clat (usec): min=124, max=907, avg=166.16, stdev=29.82 00:10:10.863 lat (usec): min=132, max=28636, avg=194.02, stdev=573.38 00:10:10.863 clat percentiles (usec): 00:10:10.863 | 1.00th=[ 130], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:10:10.863 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 172], 00:10:10.863 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 190], 
95.00th=[ 196], 00:10:10.863 | 99.00th=[ 208], 99.50th=[ 219], 99.90th=[ 742], 99.95th=[ 766], 00:10:10.863 | 99.99th=[ 906] 00:10:10.863 bw ( KiB/s): min=10720, max=10720, per=100.00%, avg=10720.00, stdev= 0.00, samples=1 00:10:10.863 iops : min= 2680, max= 2680, avg=2680.00, stdev= 0.00, samples=1 00:10:10.863 lat (usec) : 250=92.45%, 500=7.48%, 750=0.02%, 1000=0.04% 00:10:10.863 cpu : usr=5.10%, sys=8.80%, ctx=4522, majf=0, minf=1 00:10:10.863 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.863 issued rwts: total=2048,2471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.863 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.863 00:10:10.863 Run status group 0 (all jobs): 00:10:10.863 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:10:10.863 WRITE: bw=9874KiB/s (10.1MB/s), 9874KiB/s-9874KiB/s (10.1MB/s-10.1MB/s), io=9884KiB (10.1MB), run=1001-1001msec 00:10:10.863 00:10:10.863 Disk stats (read/write): 00:10:10.863 nvme0n1: ios=2020/2048, merge=0/0, ticks=1388/309, in_queue=1697, util=98.70% 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:10.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.863 
00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:10.863 rmmod nvme_tcp 00:10:10.863 rmmod nvme_fabrics 00:10:10.863 rmmod nvme_keyring 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 151413 ']' 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 151413 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 151413 ']' 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 151413 00:10:10.863 00:15:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 151413 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 151413' 00:10:10.863 killing process with pid 151413 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 151413 00:10:10.863 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 151413 00:10:11.131 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:11.131 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.131 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:11.131 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:11.131 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:11.131 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.131 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.131 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.131 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:11.131 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.131 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.131 00:15:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.683 00:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:13.683 00:10:13.683 real 0m10.645s 00:10:13.683 user 0m24.182s 00:10:13.683 sys 0m2.994s 00:10:13.683 00:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.683 00:15:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.683 ************************************ 00:10:13.683 END TEST nvmf_nmic 00:10:13.683 ************************************ 00:10:13.683 00:15:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:13.683 00:15:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:13.683 00:15:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.683 00:15:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:13.683 ************************************ 00:10:13.683 START TEST nvmf_fio_target 00:10:13.683 ************************************ 00:10:13.683 00:15:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:13.683 * Looking for test storage... 
00:10:13.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:13.683 00:15:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:13.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.683 
--rc genhtml_branch_coverage=1 00:10:13.683 --rc genhtml_function_coverage=1 00:10:13.683 --rc genhtml_legend=1 00:10:13.683 --rc geninfo_all_blocks=1 00:10:13.683 --rc geninfo_unexecuted_blocks=1 00:10:13.683 00:10:13.683 ' 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:13.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.683 --rc genhtml_branch_coverage=1 00:10:13.683 --rc genhtml_function_coverage=1 00:10:13.683 --rc genhtml_legend=1 00:10:13.683 --rc geninfo_all_blocks=1 00:10:13.683 --rc geninfo_unexecuted_blocks=1 00:10:13.683 00:10:13.683 ' 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:13.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.683 --rc genhtml_branch_coverage=1 00:10:13.683 --rc genhtml_function_coverage=1 00:10:13.683 --rc genhtml_legend=1 00:10:13.683 --rc geninfo_all_blocks=1 00:10:13.683 --rc geninfo_unexecuted_blocks=1 00:10:13.683 00:10:13.683 ' 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:13.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.683 --rc genhtml_branch_coverage=1 00:10:13.683 --rc genhtml_function_coverage=1 00:10:13.683 --rc genhtml_legend=1 00:10:13.683 --rc geninfo_all_blocks=1 00:10:13.683 --rc geninfo_unexecuted_blocks=1 00:10:13.683 00:10:13.683 ' 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.683 
00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.683 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.684 00:15:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:13.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.684 00:15:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:13.684 00:15:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:15.593 00:15:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.593 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- 
# [[ tcp == rdma ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:15.594 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:15.594 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.594 
00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:15.594 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.594 00:15:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:15.594 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 
)) 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:15.594 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:15.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:15.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:10:15.853 00:10:15.853 --- 10.0.0.2 ping statistics --- 00:10:15.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.853 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:15.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:15.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:10:15.853 00:10:15.853 --- 10.0.0.1 ping statistics --- 00:10:15.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.853 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=154220 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 154220 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 154220 ']' 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.853 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.853 [2024-11-18 00:15:39.555808] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:10:15.853 [2024-11-18 00:15:39.555895] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.853 [2024-11-18 00:15:39.625643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.853 [2024-11-18 00:15:39.670757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.853 [2024-11-18 00:15:39.670824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.853 [2024-11-18 00:15:39.670846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.853 [2024-11-18 00:15:39.670873] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.853 [2024-11-18 00:15:39.670887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:15.853 [2024-11-18 00:15:39.672612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.853 [2024-11-18 00:15:39.672673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.853 [2024-11-18 00:15:39.672761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.853 [2024-11-18 00:15:39.672764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.112 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.112 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:16.112 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:16.112 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:16.112 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.112 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.112 00:15:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:16.372 [2024-11-18 00:15:40.068776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.372 00:15:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:16.632 00:15:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:16.632 00:15:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:16.891 00:15:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:16.891 00:15:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:17.463 00:15:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:17.463 00:15:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:17.463 00:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:17.463 00:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:18.035 00:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.035 00:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:18.035 00:15:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.605 00:15:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:18.605 00:15:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.605 00:15:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:18.605 00:15:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:19.173 00:15:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:19.173 00:15:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:19.173 00:15:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:19.432 00:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:19.432 00:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:19.691 00:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.950 [2024-11-18 00:15:43.755672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.210 00:15:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:20.469 00:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:20.731 00:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:21.300 00:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:21.300 00:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:21.300 00:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:21.300 00:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:21.301 00:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:21.301 00:15:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:23.227 00:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:23.227 00:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:23.227 00:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:23.227 00:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:23.227 00:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:23.227 00:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:23.227 00:15:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:23.227 [global] 00:10:23.227 thread=1 00:10:23.227 invalidate=1 00:10:23.227 rw=write 00:10:23.227 time_based=1 00:10:23.227 runtime=1 00:10:23.227 ioengine=libaio 00:10:23.227 direct=1 00:10:23.227 bs=4096 00:10:23.227 iodepth=1 00:10:23.227 norandommap=0 00:10:23.227 numjobs=1 00:10:23.227 00:10:23.227 
verify_dump=1 00:10:23.227 verify_backlog=512 00:10:23.227 verify_state_save=0 00:10:23.227 do_verify=1 00:10:23.227 verify=crc32c-intel 00:10:23.228 [job0] 00:10:23.228 filename=/dev/nvme0n1 00:10:23.228 [job1] 00:10:23.228 filename=/dev/nvme0n2 00:10:23.228 [job2] 00:10:23.228 filename=/dev/nvme0n3 00:10:23.228 [job3] 00:10:23.228 filename=/dev/nvme0n4 00:10:23.228 Could not set queue depth (nvme0n1) 00:10:23.228 Could not set queue depth (nvme0n2) 00:10:23.228 Could not set queue depth (nvme0n3) 00:10:23.228 Could not set queue depth (nvme0n4) 00:10:23.487 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.487 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.487 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.487 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.487 fio-3.35 00:10:23.487 Starting 4 threads 00:10:24.869 00:10:24.869 job0: (groupid=0, jobs=1): err= 0: pid=155299: Mon Nov 18 00:15:48 2024 00:10:24.869 read: IOPS=52, BW=212KiB/s (217kB/s)(216KiB/1019msec) 00:10:24.869 slat (nsec): min=6949, max=33395, avg=15859.11, stdev=9640.04 00:10:24.869 clat (usec): min=266, max=41187, avg=16145.18, stdev=19995.39 00:10:24.869 lat (usec): min=275, max=41214, avg=16161.04, stdev=20002.31 00:10:24.869 clat percentiles (usec): 00:10:24.869 | 1.00th=[ 265], 5.00th=[ 289], 10.00th=[ 310], 20.00th=[ 318], 00:10:24.869 | 30.00th=[ 330], 40.00th=[ 363], 50.00th=[ 375], 60.00th=[ 474], 00:10:24.869 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:24.869 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:24.869 | 99.99th=[41157] 00:10:24.869 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:10:24.869 slat (nsec): min=6266, max=52972, 
avg=11755.11, stdev=6799.92 00:10:24.869 clat (usec): min=142, max=535, avg=270.25, stdev=75.74 00:10:24.869 lat (usec): min=149, max=550, avg=282.00, stdev=78.02 00:10:24.869 clat percentiles (usec): 00:10:24.869 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 176], 20.00th=[ 206], 00:10:24.869 | 30.00th=[ 243], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 269], 00:10:24.869 | 70.00th=[ 281], 80.00th=[ 314], 90.00th=[ 392], 95.00th=[ 424], 00:10:24.869 | 99.00th=[ 482], 99.50th=[ 510], 99.90th=[ 537], 99.95th=[ 537], 00:10:24.869 | 99.99th=[ 537] 00:10:24.869 bw ( KiB/s): min= 4087, max= 4087, per=20.34%, avg=4087.00, stdev= 0.00, samples=1 00:10:24.869 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:24.869 lat (usec) : 250=33.22%, 500=62.37%, 750=0.71% 00:10:24.869 lat (msec) : 50=3.71% 00:10:24.869 cpu : usr=0.49%, sys=0.49%, ctx=566, majf=0, minf=2 00:10:24.869 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.869 issued rwts: total=54,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.869 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:24.869 job1: (groupid=0, jobs=1): err= 0: pid=155300: Mon Nov 18 00:15:48 2024 00:10:24.869 read: IOPS=549, BW=2198KiB/s (2251kB/s)(2240KiB/1019msec) 00:10:24.869 slat (nsec): min=5483, max=34197, avg=7249.87, stdev=3980.79 00:10:24.869 clat (usec): min=184, max=41216, avg=1342.16, stdev=6443.06 00:10:24.869 lat (usec): min=189, max=41237, avg=1349.41, stdev=6446.29 00:10:24.869 clat percentiles (usec): 00:10:24.869 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 227], 00:10:24.869 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:10:24.869 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 416], 95.00th=[ 515], 00:10:24.869 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:10:24.869 | 99.99th=[41157] 00:10:24.869 write: IOPS=1004, BW=4020KiB/s (4116kB/s)(4096KiB/1019msec); 0 zone resets 00:10:24.869 slat (nsec): min=7036, max=64586, avg=14545.38, stdev=7216.39 00:10:24.869 clat (usec): min=128, max=399, avg=237.45, stdev=52.43 00:10:24.869 lat (usec): min=135, max=409, avg=251.99, stdev=50.09 00:10:24.869 clat percentiles (usec): 00:10:24.869 | 1.00th=[ 135], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 188], 00:10:24.869 | 30.00th=[ 206], 40.00th=[ 227], 50.00th=[ 243], 60.00th=[ 249], 00:10:24.869 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 297], 95.00th=[ 347], 00:10:24.869 | 99.00th=[ 379], 99.50th=[ 392], 99.90th=[ 396], 99.95th=[ 400], 00:10:24.869 | 99.99th=[ 400] 00:10:24.869 bw ( KiB/s): min= 8175, max= 8175, per=40.68%, avg=8175.00, stdev= 0.00, samples=1 00:10:24.869 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:10:24.869 lat (usec) : 250=48.93%, 500=49.05%, 750=1.07% 00:10:24.870 lat (msec) : 50=0.95% 00:10:24.870 cpu : usr=1.38%, sys=2.36%, ctx=1584, majf=0, minf=1 00:10:24.870 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.870 issued rwts: total=560,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.870 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:24.870 job2: (groupid=0, jobs=1): err= 0: pid=155301: Mon Nov 18 00:15:48 2024 00:10:24.870 read: IOPS=1963, BW=7852KiB/s (8041kB/s)(7860KiB/1001msec) 00:10:24.870 slat (nsec): min=5959, max=53801, avg=14452.34, stdev=6002.31 00:10:24.870 clat (usec): min=203, max=355, avg=258.70, stdev=24.23 00:10:24.870 lat (usec): min=209, max=374, avg=273.15, stdev=28.34 00:10:24.870 clat percentiles (usec): 00:10:24.870 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 235], 00:10:24.870 | 30.00th=[ 249], 40.00th=[ 255], 
50.00th=[ 262], 60.00th=[ 269], 00:10:24.870 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 297], 00:10:24.870 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 351], 99.95th=[ 355], 00:10:24.870 | 99.99th=[ 355] 00:10:24.870 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:24.870 slat (nsec): min=7745, max=52548, avg=15571.41, stdev=7430.55 00:10:24.870 clat (usec): min=146, max=861, avg=202.34, stdev=34.89 00:10:24.870 lat (usec): min=159, max=873, avg=217.91, stdev=36.56 00:10:24.870 clat percentiles (usec): 00:10:24.870 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 182], 00:10:24.870 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:10:24.870 | 70.00th=[ 208], 80.00th=[ 219], 90.00th=[ 235], 95.00th=[ 251], 00:10:24.870 | 99.00th=[ 281], 99.50th=[ 351], 99.90th=[ 627], 99.95th=[ 799], 00:10:24.870 | 99.99th=[ 865] 00:10:24.870 bw ( KiB/s): min= 8175, max= 8175, per=40.68%, avg=8175.00, stdev= 0.00, samples=1 00:10:24.870 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:10:24.870 lat (usec) : 250=63.89%, 500=36.03%, 750=0.02%, 1000=0.05% 00:10:24.870 cpu : usr=5.20%, sys=7.20%, ctx=4014, majf=0, minf=1 00:10:24.870 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.870 issued rwts: total=1965,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.870 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:24.870 job3: (groupid=0, jobs=1): err= 0: pid=155302: Mon Nov 18 00:15:48 2024 00:10:24.870 read: IOPS=1337, BW=5351KiB/s (5479kB/s)(5356KiB/1001msec) 00:10:24.870 slat (nsec): min=5884, max=60699, avg=16788.05, stdev=5616.29 00:10:24.870 clat (usec): min=204, max=42008, avg=462.78, stdev=2623.43 00:10:24.870 lat (usec): min=214, max=42014, avg=479.56, stdev=2623.03 
00:10:24.870 clat percentiles (usec): 00:10:24.870 | 1.00th=[ 225], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 251], 00:10:24.870 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 273], 00:10:24.870 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 392], 95.00th=[ 486], 00:10:24.870 | 99.00th=[ 562], 99.50th=[ 4113], 99.90th=[42206], 99.95th=[42206], 00:10:24.870 | 99.99th=[42206] 00:10:24.870 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:24.870 slat (nsec): min=7773, max=57153, avg=15512.93, stdev=6653.29 00:10:24.870 clat (usec): min=149, max=917, avg=208.87, stdev=44.74 00:10:24.870 lat (usec): min=157, max=939, avg=224.38, stdev=44.74 00:10:24.870 clat percentiles (usec): 00:10:24.870 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 180], 00:10:24.870 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:10:24.870 | 70.00th=[ 212], 80.00th=[ 229], 90.00th=[ 247], 95.00th=[ 277], 00:10:24.870 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 461], 99.95th=[ 914], 00:10:24.870 | 99.99th=[ 914] 00:10:24.870 bw ( KiB/s): min= 4087, max= 4087, per=20.34%, avg=4087.00, stdev= 0.00, samples=1 00:10:24.870 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:24.870 lat (usec) : 250=57.29%, 500=40.59%, 750=1.84%, 1000=0.03% 00:10:24.870 lat (msec) : 10=0.03%, 50=0.21% 00:10:24.870 cpu : usr=3.50%, sys=6.30%, ctx=2875, majf=0, minf=1 00:10:24.870 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.870 issued rwts: total=1339,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.870 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:24.870 00:10:24.870 Run status group 0 (all jobs): 00:10:24.870 READ: bw=15.0MiB/s (15.7MB/s), 212KiB/s-7852KiB/s (217kB/s-8041kB/s), io=15.3MiB (16.0MB), 
run=1001-1019msec 00:10:24.870 WRITE: bw=19.6MiB/s (20.6MB/s), 2010KiB/s-8184KiB/s (2058kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1019msec 00:10:24.870 00:10:24.870 Disk stats (read/write): 00:10:24.870 nvme0n1: ios=99/512, merge=0/0, ticks=706/136, in_queue=842, util=86.87% 00:10:24.870 nvme0n2: ios=554/1024, merge=0/0, ticks=545/230, in_queue=775, util=86.44% 00:10:24.870 nvme0n3: ios=1594/1866, merge=0/0, ticks=906/363, in_queue=1269, util=98.00% 00:10:24.870 nvme0n4: ios=1024/1352, merge=0/0, ticks=497/271, in_queue=768, util=89.64% 00:10:24.870 00:15:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:24.870 [global] 00:10:24.870 thread=1 00:10:24.870 invalidate=1 00:10:24.870 rw=randwrite 00:10:24.870 time_based=1 00:10:24.870 runtime=1 00:10:24.870 ioengine=libaio 00:10:24.870 direct=1 00:10:24.870 bs=4096 00:10:24.870 iodepth=1 00:10:24.870 norandommap=0 00:10:24.870 numjobs=1 00:10:24.870 00:10:24.870 verify_dump=1 00:10:24.870 verify_backlog=512 00:10:24.870 verify_state_save=0 00:10:24.870 do_verify=1 00:10:24.870 verify=crc32c-intel 00:10:24.870 [job0] 00:10:24.870 filename=/dev/nvme0n1 00:10:24.870 [job1] 00:10:24.870 filename=/dev/nvme0n2 00:10:24.870 [job2] 00:10:24.870 filename=/dev/nvme0n3 00:10:24.870 [job3] 00:10:24.870 filename=/dev/nvme0n4 00:10:24.870 Could not set queue depth (nvme0n1) 00:10:24.870 Could not set queue depth (nvme0n2) 00:10:24.870 Could not set queue depth (nvme0n3) 00:10:24.870 Could not set queue depth (nvme0n4) 00:10:24.870 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.870 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.870 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.870 job3: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.870 fio-3.35 00:10:24.870 Starting 4 threads 00:10:26.254 00:10:26.254 job0: (groupid=0, jobs=1): err= 0: pid=155531: Mon Nov 18 00:15:49 2024 00:10:26.254 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:26.254 slat (nsec): min=6179, max=66106, avg=17291.59, stdev=7424.77 00:10:26.254 clat (usec): min=212, max=42119, avg=661.49, stdev=3368.02 00:10:26.254 lat (usec): min=221, max=42143, avg=678.78, stdev=3368.41 00:10:26.254 clat percentiles (usec): 00:10:26.254 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 249], 20.00th=[ 314], 00:10:26.254 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 359], 60.00th=[ 375], 00:10:26.254 | 70.00th=[ 416], 80.00th=[ 482], 90.00th=[ 545], 95.00th=[ 578], 00:10:26.254 | 99.00th=[ 660], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:10:26.254 | 99.99th=[42206] 00:10:26.254 write: IOPS=1297, BW=5191KiB/s (5315kB/s)(5196KiB/1001msec); 0 zone resets 00:10:26.254 slat (nsec): min=7136, max=58689, avg=13019.77, stdev=5948.37 00:10:26.254 clat (usec): min=146, max=456, avg=213.61, stdev=47.05 00:10:26.254 lat (usec): min=156, max=466, avg=226.63, stdev=47.45 00:10:26.254 clat percentiles (usec): 00:10:26.254 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:10:26.254 | 30.00th=[ 178], 40.00th=[ 190], 50.00th=[ 210], 60.00th=[ 229], 00:10:26.254 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 285], 00:10:26.254 | 99.00th=[ 396], 99.50th=[ 400], 99.90th=[ 412], 99.95th=[ 457], 00:10:26.254 | 99.99th=[ 457] 00:10:26.254 bw ( KiB/s): min= 4416, max= 4416, per=20.85%, avg=4416.00, stdev= 0.00, samples=1 00:10:26.254 iops : min= 1104, max= 1104, avg=1104.00, stdev= 0.00, samples=1 00:10:26.254 lat (usec) : 250=49.50%, 500=43.00%, 750=7.10%, 1000=0.09% 00:10:26.254 lat (msec) : 50=0.30% 00:10:26.254 cpu : usr=3.00%, sys=4.50%, ctx=2323, majf=0, minf=1 00:10:26.254 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.255 issued rwts: total=1024,1299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.255 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.255 job1: (groupid=0, jobs=1): err= 0: pid=155532: Mon Nov 18 00:15:49 2024 00:10:26.255 read: IOPS=1067, BW=4271KiB/s (4373kB/s)(4352KiB/1019msec) 00:10:26.255 slat (nsec): min=6487, max=71097, avg=19218.72, stdev=7066.90 00:10:26.255 clat (usec): min=217, max=42218, avg=581.07, stdev=2759.04 00:10:26.255 lat (usec): min=225, max=42237, avg=600.29, stdev=2758.89 00:10:26.255 clat percentiles (usec): 00:10:26.255 | 1.00th=[ 243], 5.00th=[ 258], 10.00th=[ 273], 20.00th=[ 322], 00:10:26.255 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 367], 60.00th=[ 400], 00:10:26.255 | 70.00th=[ 441], 80.00th=[ 482], 90.00th=[ 529], 95.00th=[ 586], 00:10:26.255 | 99.00th=[ 685], 99.50th=[ 889], 99.90th=[41157], 99.95th=[42206], 00:10:26.255 | 99.99th=[42206] 00:10:26.255 write: IOPS=1507, BW=6029KiB/s (6174kB/s)(6144KiB/1019msec); 0 zone resets 00:10:26.255 slat (nsec): min=7845, max=65709, avg=15944.52, stdev=6980.12 00:10:26.255 clat (usec): min=144, max=466, avg=213.25, stdev=43.83 00:10:26.255 lat (usec): min=155, max=490, avg=229.20, stdev=43.20 00:10:26.255 clat percentiles (usec): 00:10:26.255 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 180], 00:10:26.255 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 219], 00:10:26.255 | 70.00th=[ 237], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 285], 00:10:26.255 | 99.00th=[ 392], 99.50th=[ 404], 99.90th=[ 437], 99.95th=[ 465], 00:10:26.255 | 99.99th=[ 465] 00:10:26.255 bw ( KiB/s): min= 4096, max= 8192, per=29.01%, avg=6144.00, stdev=2896.31, samples=2 00:10:26.255 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:10:26.255 lat 
(usec) : 250=48.67%, 500=44.78%, 750=6.29%, 1000=0.08% 00:10:26.255 lat (msec) : 50=0.19% 00:10:26.255 cpu : usr=2.75%, sys=6.58%, ctx=2625, majf=0, minf=1 00:10:26.255 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.255 issued rwts: total=1088,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.255 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.255 job2: (groupid=0, jobs=1): err= 0: pid=155537: Mon Nov 18 00:15:49 2024 00:10:26.255 read: IOPS=457, BW=1829KiB/s (1872kB/s)(1856KiB/1015msec) 00:10:26.255 slat (nsec): min=7439, max=71044, avg=16814.25, stdev=9434.78 00:10:26.255 clat (usec): min=204, max=41218, avg=1958.34, stdev=7951.90 00:10:26.255 lat (usec): min=212, max=41236, avg=1975.16, stdev=7952.47 00:10:26.255 clat percentiles (usec): 00:10:26.255 | 1.00th=[ 233], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 262], 00:10:26.255 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:10:26.255 | 70.00th=[ 347], 80.00th=[ 437], 90.00th=[ 490], 95.00th=[ 510], 00:10:26.255 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:26.255 | 99.99th=[41157] 00:10:26.255 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:10:26.255 slat (nsec): min=8048, max=31393, avg=9543.43, stdev=2972.08 00:10:26.255 clat (usec): min=148, max=227, avg=175.03, stdev=12.25 00:10:26.255 lat (usec): min=157, max=253, avg=184.58, stdev=12.50 00:10:26.255 clat percentiles (usec): 00:10:26.255 | 1.00th=[ 157], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:10:26.255 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:10:26.255 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:10:26.255 | 99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 229], 99.95th=[ 229], 00:10:26.255 | 99.99th=[ 
229] 00:10:26.255 bw ( KiB/s): min= 4096, max= 4096, per=19.34%, avg=4096.00, stdev= 0.00, samples=1 00:10:26.255 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:26.255 lat (usec) : 250=55.12%, 500=41.70%, 750=1.23% 00:10:26.255 lat (msec) : 50=1.95% 00:10:26.255 cpu : usr=0.39%, sys=2.17%, ctx=979, majf=0, minf=1 00:10:26.255 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.255 issued rwts: total=464,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.255 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.255 job3: (groupid=0, jobs=1): err= 0: pid=155538: Mon Nov 18 00:15:49 2024 00:10:26.255 read: IOPS=1908, BW=7632KiB/s (7816kB/s)(7640KiB/1001msec) 00:10:26.255 slat (nsec): min=5832, max=62098, avg=14846.68, stdev=4849.23 00:10:26.255 clat (usec): min=205, max=593, avg=263.14, stdev=32.92 00:10:26.255 lat (usec): min=213, max=610, avg=277.99, stdev=34.39 00:10:26.255 clat percentiles (usec): 00:10:26.255 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 247], 00:10:26.255 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:10:26.255 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 293], 00:10:26.255 | 99.00th=[ 474], 99.50th=[ 498], 99.90th=[ 586], 99.95th=[ 594], 00:10:26.255 | 99.99th=[ 594] 00:10:26.255 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:26.255 slat (nsec): min=7502, max=69339, avg=14684.92, stdev=7532.06 00:10:26.255 clat (usec): min=155, max=464, avg=205.85, stdev=54.94 00:10:26.255 lat (usec): min=164, max=502, avg=220.54, stdev=59.39 00:10:26.255 clat percentiles (usec): 00:10:26.255 | 1.00th=[ 163], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:10:26.255 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 196], 00:10:26.255 | 
70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 302], 95.00th=[ 343], 00:10:26.255 | 99.00th=[ 404], 99.50th=[ 412], 99.90th=[ 437], 99.95th=[ 445], 00:10:26.255 | 99.99th=[ 465] 00:10:26.255 bw ( KiB/s): min= 8192, max= 8192, per=38.68%, avg=8192.00, stdev= 0.00, samples=1 00:10:26.255 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:26.255 lat (usec) : 250=57.91%, 500=41.89%, 750=0.20% 00:10:26.255 cpu : usr=4.70%, sys=7.80%, ctx=3958, majf=0, minf=1 00:10:26.255 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.255 issued rwts: total=1910,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.255 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.255 00:10:26.255 Run status group 0 (all jobs): 00:10:26.255 READ: bw=17.2MiB/s (18.0MB/s), 1829KiB/s-7632KiB/s (1872kB/s-7816kB/s), io=17.5MiB (18.4MB), run=1001-1019msec 00:10:26.255 WRITE: bw=20.7MiB/s (21.7MB/s), 2018KiB/s-8184KiB/s (2066kB/s-8380kB/s), io=21.1MiB (22.1MB), run=1001-1019msec 00:10:26.255 00:10:26.255 Disk stats (read/write): 00:10:26.255 nvme0n1: ios=1020/1024, merge=0/0, ticks=619/212, in_queue=831, util=87.68% 00:10:26.255 nvme0n2: ios=1074/1231, merge=0/0, ticks=950/260, in_queue=1210, util=97.97% 00:10:26.255 nvme0n3: ios=483/512, merge=0/0, ticks=1723/84, in_queue=1807, util=98.02% 00:10:26.255 nvme0n4: ios=1536/1841, merge=0/0, ticks=394/347, in_queue=741, util=89.68% 00:10:26.255 00:15:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:26.255 [global] 00:10:26.255 thread=1 00:10:26.255 invalidate=1 00:10:26.255 rw=write 00:10:26.255 time_based=1 00:10:26.255 runtime=1 00:10:26.255 ioengine=libaio 00:10:26.255 direct=1 00:10:26.255 
bs=4096 00:10:26.255 iodepth=128 00:10:26.255 norandommap=0 00:10:26.255 numjobs=1 00:10:26.255 00:10:26.255 verify_dump=1 00:10:26.255 verify_backlog=512 00:10:26.255 verify_state_save=0 00:10:26.255 do_verify=1 00:10:26.255 verify=crc32c-intel 00:10:26.255 [job0] 00:10:26.255 filename=/dev/nvme0n1 00:10:26.255 [job1] 00:10:26.255 filename=/dev/nvme0n2 00:10:26.255 [job2] 00:10:26.255 filename=/dev/nvme0n3 00:10:26.255 [job3] 00:10:26.255 filename=/dev/nvme0n4 00:10:26.255 Could not set queue depth (nvme0n1) 00:10:26.255 Could not set queue depth (nvme0n2) 00:10:26.255 Could not set queue depth (nvme0n3) 00:10:26.255 Could not set queue depth (nvme0n4) 00:10:26.514 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.514 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.514 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.514 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.514 fio-3.35 00:10:26.514 Starting 4 threads 00:10:27.894 00:10:27.894 job0: (groupid=0, jobs=1): err= 0: pid=155764: Mon Nov 18 00:15:51 2024 00:10:27.894 read: IOPS=3215, BW=12.6MiB/s (13.2MB/s)(13.1MiB/1044msec) 00:10:27.894 slat (usec): min=3, max=24240, avg=158.37, stdev=924.43 00:10:27.894 clat (usec): min=11048, max=80915, avg=22140.91, stdev=12324.94 00:10:27.894 lat (usec): min=11056, max=80937, avg=22299.29, stdev=12398.49 00:10:27.894 clat percentiles (usec): 00:10:27.894 | 1.00th=[11731], 5.00th=[13435], 10.00th=[14615], 20.00th=[15139], 00:10:27.894 | 30.00th=[16188], 40.00th=[17433], 50.00th=[18220], 60.00th=[18744], 00:10:27.894 | 70.00th=[20317], 80.00th=[25035], 90.00th=[36963], 95.00th=[55837], 00:10:27.894 | 99.00th=[74974], 99.50th=[74974], 99.90th=[74974], 99.95th=[77071], 00:10:27.894 | 99.99th=[81265] 00:10:27.894 
write: IOPS=3432, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1044msec); 0 zone resets 00:10:27.894 slat (usec): min=4, max=12364, avg=119.40, stdev=588.19 00:10:27.894 clat (usec): min=8689, max=37776, avg=16046.97, stdev=4799.78 00:10:27.894 lat (usec): min=8717, max=37785, avg=16166.37, stdev=4838.71 00:10:27.894 clat percentiles (usec): 00:10:27.894 | 1.00th=[10945], 5.00th=[11994], 10.00th=[12125], 20.00th=[12780], 00:10:27.894 | 30.00th=[13042], 40.00th=[13566], 50.00th=[13960], 60.00th=[14877], 00:10:27.894 | 70.00th=[16057], 80.00th=[20055], 90.00th=[24249], 95.00th=[26084], 00:10:27.894 | 99.00th=[32900], 99.50th=[36439], 99.90th=[38011], 99.95th=[38011], 00:10:27.894 | 99.99th=[38011] 00:10:27.894 bw ( KiB/s): min=12312, max=16384, per=25.67%, avg=14348.00, stdev=2879.34, samples=2 00:10:27.894 iops : min= 3078, max= 4096, avg=3587.00, stdev=719.83, samples=2 00:10:27.894 lat (msec) : 10=0.26%, 20=74.34%, 50=22.48%, 100=2.92% 00:10:27.894 cpu : usr=5.18%, sys=7.57%, ctx=316, majf=0, minf=2 00:10:27.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:27.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:27.895 issued rwts: total=3357,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.895 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:27.895 job1: (groupid=0, jobs=1): err= 0: pid=155765: Mon Nov 18 00:15:51 2024 00:10:27.895 read: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec) 00:10:27.895 slat (usec): min=2, max=14598, avg=146.83, stdev=813.48 00:10:27.895 clat (usec): min=7324, max=33068, avg=19483.06, stdev=4793.76 00:10:27.895 lat (usec): min=7341, max=33093, avg=19629.89, stdev=4866.59 00:10:27.895 clat percentiles (usec): 00:10:27.895 | 1.00th=[ 9241], 5.00th=[11469], 10.00th=[12780], 20.00th=[15533], 00:10:27.895 | 30.00th=[16909], 40.00th=[18482], 50.00th=[19268], 60.00th=[20055], 00:10:27.895 
| 70.00th=[21103], 80.00th=[24249], 90.00th=[26870], 95.00th=[27919], 00:10:27.895 | 99.00th=[28443], 99.50th=[31065], 99.90th=[32375], 99.95th=[32900], 00:10:27.895 | 99.99th=[33162] 00:10:27.895 write: IOPS=3504, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1010msec); 0 zone resets 00:10:27.895 slat (usec): min=3, max=34214, avg=139.42, stdev=1008.78 00:10:27.895 clat (usec): min=709, max=96825, avg=17262.91, stdev=9634.23 00:10:27.895 lat (usec): min=715, max=96840, avg=17402.33, stdev=9757.12 00:10:27.895 clat percentiles (usec): 00:10:27.895 | 1.00th=[ 4686], 5.00th=[ 8029], 10.00th=[ 8717], 20.00th=[11469], 00:10:27.895 | 30.00th=[12518], 40.00th=[13304], 50.00th=[14091], 60.00th=[16909], 00:10:27.895 | 70.00th=[19792], 80.00th=[22414], 90.00th=[27919], 95.00th=[32113], 00:10:27.895 | 99.00th=[38536], 99.50th=[72877], 99.90th=[96994], 99.95th=[96994], 00:10:27.895 | 99.99th=[96994] 00:10:27.895 bw ( KiB/s): min=12336, max=14960, per=24.42%, avg=13648.00, stdev=1855.45, samples=2 00:10:27.895 iops : min= 3084, max= 3740, avg=3412.00, stdev=463.86, samples=2 00:10:27.895 lat (usec) : 750=0.06% 00:10:27.895 lat (msec) : 2=0.02%, 4=0.26%, 10=7.89%, 20=58.27%, 50=33.02% 00:10:27.895 lat (msec) : 100=0.48% 00:10:27.895 cpu : usr=5.15%, sys=6.54%, ctx=309, majf=0, minf=1 00:10:27.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:27.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:27.895 issued rwts: total=3072,3540,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.895 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:27.895 job2: (groupid=0, jobs=1): err= 0: pid=155766: Mon Nov 18 00:15:51 2024 00:10:27.895 read: IOPS=3046, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec) 00:10:27.895 slat (usec): min=3, max=14766, avg=139.32, stdev=929.44 00:10:27.895 clat (usec): min=4423, max=58242, avg=16772.88, stdev=8378.95 00:10:27.895 
lat (usec): min=4429, max=58255, avg=16912.20, stdev=8465.38 00:10:27.895 clat percentiles (usec): 00:10:27.895 | 1.00th=[ 8029], 5.00th=[10159], 10.00th=[11076], 20.00th=[11994], 00:10:27.895 | 30.00th=[13042], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:10:27.895 | 70.00th=[15139], 80.00th=[19006], 90.00th=[28705], 95.00th=[38536], 00:10:27.895 | 99.00th=[46924], 99.50th=[55837], 99.90th=[58459], 99.95th=[58459], 00:10:27.895 | 99.99th=[58459] 00:10:27.895 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:10:27.895 slat (usec): min=4, max=9799, avg=146.52, stdev=652.52 00:10:27.895 clat (usec): min=979, max=58291, avg=21308.97, stdev=13303.97 00:10:27.895 lat (usec): min=989, max=59475, avg=21455.49, stdev=13374.49 00:10:27.895 clat percentiles (usec): 00:10:27.895 | 1.00th=[ 5014], 5.00th=[ 9110], 10.00th=[10814], 20.00th=[11994], 00:10:27.895 | 30.00th=[13042], 40.00th=[13829], 50.00th=[17957], 60.00th=[21103], 00:10:27.895 | 70.00th=[21365], 80.00th=[24249], 90.00th=[46400], 95.00th=[56361], 00:10:27.895 | 99.00th=[57934], 99.50th=[57934], 99.90th=[57934], 99.95th=[58459], 00:10:27.895 | 99.99th=[58459] 00:10:27.895 bw ( KiB/s): min=11328, max=16368, per=24.78%, avg=13848.00, stdev=3563.82, samples=2 00:10:27.895 iops : min= 2832, max= 4092, avg=3462.00, stdev=890.95, samples=2 00:10:27.895 lat (usec) : 1000=0.11% 00:10:27.895 lat (msec) : 2=0.05%, 4=0.09%, 10=5.25%, 20=61.45%, 50=28.33% 00:10:27.895 lat (msec) : 100=4.73% 00:10:27.895 cpu : usr=3.67%, sys=6.54%, ctx=433, majf=0, minf=2 00:10:27.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:27.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:27.895 issued rwts: total=3077,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.895 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:27.895 job3: (groupid=0, 
jobs=1): err= 0: pid=155767: Mon Nov 18 00:15:51 2024 00:10:27.895 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:10:27.895 slat (usec): min=2, max=11042, avg=121.76, stdev=752.42 00:10:27.895 clat (usec): min=4880, max=38680, avg=14878.16, stdev=6093.85 00:10:27.895 lat (usec): min=4886, max=38689, avg=14999.93, stdev=6127.75 00:10:27.895 clat percentiles (usec): 00:10:27.895 | 1.00th=[ 6652], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[11338], 00:10:27.895 | 30.00th=[11600], 40.00th=[12780], 50.00th=[13304], 60.00th=[13829], 00:10:27.895 | 70.00th=[14877], 80.00th=[15795], 90.00th=[22414], 95.00th=[29754], 00:10:27.895 | 99.00th=[38536], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:10:27.895 | 99.99th=[38536] 00:10:27.895 write: IOPS=3832, BW=15.0MiB/s (15.7MB/s)(15.1MiB/1012msec); 0 zone resets 00:10:27.895 slat (usec): min=4, max=7714, avg=133.07, stdev=555.24 00:10:27.895 clat (usec): min=3497, max=41139, avg=19198.01, stdev=8035.23 00:10:27.895 lat (usec): min=3503, max=41145, avg=19331.08, stdev=8087.66 00:10:27.895 clat percentiles (usec): 00:10:27.895 | 1.00th=[ 4817], 5.00th=[ 9110], 10.00th=[10945], 20.00th=[12911], 00:10:27.895 | 30.00th=[13566], 40.00th=[14222], 50.00th=[18220], 60.00th=[21103], 00:10:27.895 | 70.00th=[21627], 80.00th=[24511], 90.00th=[32637], 95.00th=[35914], 00:10:27.895 | 99.00th=[40109], 99.50th=[40109], 99.90th=[41157], 99.95th=[41157], 00:10:27.895 | 99.99th=[41157] 00:10:27.895 bw ( KiB/s): min=13624, max=16384, per=26.85%, avg=15004.00, stdev=1951.61, samples=2 00:10:27.895 iops : min= 3406, max= 4096, avg=3751.00, stdev=487.90, samples=2 00:10:27.895 lat (msec) : 4=0.16%, 10=7.26%, 20=62.15%, 50=30.42% 00:10:27.895 cpu : usr=4.35%, sys=6.43%, ctx=427, majf=0, minf=1 00:10:27.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:27.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:10:27.895 issued rwts: total=3584,3878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.895 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:27.895 00:10:27.895 Run status group 0 (all jobs): 00:10:27.895 READ: bw=49.0MiB/s (51.4MB/s), 11.9MiB/s-13.8MiB/s (12.5MB/s-14.5MB/s), io=51.1MiB (53.6MB), run=1010-1044msec 00:10:27.895 WRITE: bw=54.6MiB/s (57.2MB/s), 13.4MiB/s-15.0MiB/s (14.1MB/s-15.7MB/s), io=57.0MiB (59.7MB), run=1010-1044msec 00:10:27.895 00:10:27.895 Disk stats (read/write): 00:10:27.895 nvme0n1: ios=2708/3072, merge=0/0, ticks=18220/15460, in_queue=33680, util=96.49% 00:10:27.895 nvme0n2: ios=2592/2936, merge=0/0, ticks=20978/22005, in_queue=42983, util=100.00% 00:10:27.895 nvme0n3: ios=2619/3071, merge=0/0, ticks=29046/39995, in_queue=69041, util=100.00% 00:10:27.895 nvme0n4: ios=3119/3303, merge=0/0, ticks=32739/45053, in_queue=77792, util=95.80% 00:10:27.895 00:15:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:27.895 [global] 00:10:27.895 thread=1 00:10:27.895 invalidate=1 00:10:27.895 rw=randwrite 00:10:27.895 time_based=1 00:10:27.895 runtime=1 00:10:27.895 ioengine=libaio 00:10:27.895 direct=1 00:10:27.895 bs=4096 00:10:27.895 iodepth=128 00:10:27.895 norandommap=0 00:10:27.895 numjobs=1 00:10:27.895 00:10:27.895 verify_dump=1 00:10:27.895 verify_backlog=512 00:10:27.895 verify_state_save=0 00:10:27.895 do_verify=1 00:10:27.895 verify=crc32c-intel 00:10:27.895 [job0] 00:10:27.895 filename=/dev/nvme0n1 00:10:27.895 [job1] 00:10:27.895 filename=/dev/nvme0n2 00:10:27.895 [job2] 00:10:27.895 filename=/dev/nvme0n3 00:10:27.896 [job3] 00:10:27.896 filename=/dev/nvme0n4 00:10:27.896 Could not set queue depth (nvme0n1) 00:10:27.896 Could not set queue depth (nvme0n2) 00:10:27.896 Could not set queue depth (nvme0n3) 00:10:27.896 Could not set queue depth (nvme0n4) 00:10:27.896 job0: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.896 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.896 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.896 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.896 fio-3.35 00:10:27.896 Starting 4 threads 00:10:29.288 00:10:29.288 job0: (groupid=0, jobs=1): err= 0: pid=156005: Mon Nov 18 00:15:52 2024 00:10:29.288 read: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec) 00:10:29.288 slat (usec): min=2, max=24865, avg=172.57, stdev=1253.26 00:10:29.288 clat (usec): min=10155, max=68786, avg=20902.55, stdev=10604.19 00:10:29.288 lat (usec): min=10160, max=72531, avg=21075.12, stdev=10723.94 00:10:29.288 clat percentiles (usec): 00:10:29.288 | 1.00th=[11600], 5.00th=[12911], 10.00th=[14353], 20.00th=[15664], 00:10:29.288 | 30.00th=[16057], 40.00th=[16188], 50.00th=[16581], 60.00th=[16909], 00:10:29.288 | 70.00th=[17695], 80.00th=[22414], 90.00th=[39584], 95.00th=[47449], 00:10:29.288 | 99.00th=[62653], 99.50th=[65274], 99.90th=[65799], 99.95th=[65799], 00:10:29.288 | 99.99th=[68682] 00:10:29.288 write: IOPS=2515, BW=9.83MiB/s (10.3MB/s)(9.93MiB/1011msec); 0 zone resets 00:10:29.288 slat (usec): min=4, max=33339, avg=245.19, stdev=1510.47 00:10:29.288 clat (msec): min=8, max=133, avg=33.40, stdev=23.41 00:10:29.288 lat (msec): min=8, max=133, avg=33.65, stdev=23.53 00:10:29.288 clat percentiles (msec): 00:10:29.288 | 1.00th=[ 12], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 15], 00:10:29.288 | 30.00th=[ 22], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 27], 00:10:29.288 | 70.00th=[ 39], 80.00th=[ 47], 90.00th=[ 68], 95.00th=[ 78], 00:10:29.288 | 99.00th=[ 123], 99.50th=[ 133], 99.90th=[ 134], 99.95th=[ 134], 00:10:29.288 | 99.99th=[ 134] 00:10:29.288 bw ( 
KiB/s): min= 7048, max=12272, per=16.43%, avg=9660.00, stdev=3693.93, samples=2 00:10:29.288 iops : min= 1762, max= 3068, avg=2415.00, stdev=923.48, samples=2 00:10:29.288 lat (msec) : 10=0.15%, 20=46.68%, 50=41.69%, 100=9.93%, 250=1.55% 00:10:29.288 cpu : usr=2.87%, sys=5.25%, ctx=244, majf=0, minf=1 00:10:29.288 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:10:29.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.288 issued rwts: total=2048,2543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.288 job1: (groupid=0, jobs=1): err= 0: pid=156017: Mon Nov 18 00:15:52 2024 00:10:29.288 read: IOPS=5398, BW=21.1MiB/s (22.1MB/s)(21.1MiB/1002msec) 00:10:29.288 slat (usec): min=2, max=9089, avg=88.52, stdev=504.21 00:10:29.288 clat (usec): min=832, max=21904, avg=11459.49, stdev=1527.01 00:10:29.288 lat (usec): min=3246, max=21916, avg=11548.02, stdev=1517.97 00:10:29.288 clat percentiles (usec): 00:10:29.288 | 1.00th=[ 6325], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10814], 00:10:29.288 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:10:29.288 | 70.00th=[11863], 80.00th=[12256], 90.00th=[12911], 95.00th=[13960], 00:10:29.288 | 99.00th=[16188], 99.50th=[16909], 99.90th=[19268], 99.95th=[20317], 00:10:29.288 | 99.99th=[21890] 00:10:29.288 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:10:29.288 slat (usec): min=2, max=16268, avg=80.80, stdev=455.98 00:10:29.288 clat (usec): min=350, max=44871, avg=11541.84, stdev=4685.59 00:10:29.288 lat (usec): min=452, max=44877, avg=11622.64, stdev=4682.71 00:10:29.288 clat percentiles (usec): 00:10:29.288 | 1.00th=[ 3458], 5.00th=[ 7570], 10.00th=[ 8979], 20.00th=[ 9503], 00:10:29.288 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:10:29.288 
| 70.00th=[11469], 80.00th=[11731], 90.00th=[12518], 95.00th=[17695], 00:10:29.288 | 99.00th=[36963], 99.50th=[42206], 99.90th=[42730], 99.95th=[44827], 00:10:29.288 | 99.99th=[44827] 00:10:29.288 bw ( KiB/s): min=20480, max=24576, per=38.32%, avg=22528.00, stdev=2896.31, samples=2 00:10:29.288 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:10:29.288 lat (usec) : 500=0.04%, 1000=0.01% 00:10:29.288 lat (msec) : 4=0.97%, 10=16.83%, 20=80.10%, 50=2.06% 00:10:29.288 cpu : usr=4.40%, sys=7.39%, ctx=565, majf=0, minf=1 00:10:29.288 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:29.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.288 issued rwts: total=5409,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.288 job2: (groupid=0, jobs=1): err= 0: pid=156068: Mon Nov 18 00:15:52 2024 00:10:29.288 read: IOPS=2663, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1009msec) 00:10:29.288 slat (usec): min=3, max=13488, avg=161.02, stdev=994.05 00:10:29.288 clat (usec): min=3766, max=54720, avg=18875.85, stdev=7289.53 00:10:29.288 lat (usec): min=8347, max=54738, avg=19036.87, stdev=7352.73 00:10:29.288 clat percentiles (usec): 00:10:29.288 | 1.00th=[ 8717], 5.00th=[12125], 10.00th=[13304], 20.00th=[14484], 00:10:29.288 | 30.00th=[15008], 40.00th=[15139], 50.00th=[17171], 60.00th=[18744], 00:10:29.288 | 70.00th=[19268], 80.00th=[20579], 90.00th=[27657], 95.00th=[36439], 00:10:29.288 | 99.00th=[46924], 99.50th=[47973], 99.90th=[54789], 99.95th=[54789], 00:10:29.288 | 99.99th=[54789] 00:10:29.288 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:10:29.288 slat (usec): min=4, max=16637, avg=175.58, stdev=880.82 00:10:29.288 clat (usec): min=3677, max=54726, avg=25102.07, stdev=11039.25 00:10:29.288 lat (usec): min=3684, 
max=54760, avg=25277.65, stdev=11126.15 00:10:29.288 clat percentiles (usec): 00:10:29.288 | 1.00th=[ 7177], 5.00th=[12387], 10.00th=[13304], 20.00th=[14353], 00:10:29.288 | 30.00th=[18220], 40.00th=[20317], 50.00th=[24249], 60.00th=[25297], 00:10:29.288 | 70.00th=[26608], 80.00th=[34341], 90.00th=[43779], 95.00th=[46924], 00:10:29.288 | 99.00th=[49546], 99.50th=[51119], 99.90th=[52167], 99.95th=[54789], 00:10:29.288 | 99.99th=[54789] 00:10:29.288 bw ( KiB/s): min=12016, max=12552, per=20.90%, avg=12284.00, stdev=379.01, samples=2 00:10:29.288 iops : min= 3004, max= 3138, avg=3071.00, stdev=94.75, samples=2 00:10:29.289 lat (msec) : 4=0.12%, 10=2.52%, 20=52.86%, 50=43.76%, 100=0.75% 00:10:29.289 cpu : usr=3.67%, sys=7.34%, ctx=315, majf=0, minf=1 00:10:29.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:29.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.289 issued rwts: total=2687,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.289 job3: (groupid=0, jobs=1): err= 0: pid=156079: Mon Nov 18 00:15:52 2024 00:10:29.289 read: IOPS=3862, BW=15.1MiB/s (15.8MB/s)(15.8MiB/1044msec) 00:10:29.289 slat (usec): min=3, max=21673, avg=122.45, stdev=753.48 00:10:29.289 clat (usec): min=9820, max=68919, avg=17930.45, stdev=11739.48 00:10:29.289 lat (usec): min=9838, max=71185, avg=18052.90, stdev=11793.64 00:10:29.289 clat percentiles (usec): 00:10:29.289 | 1.00th=[10552], 5.00th=[11469], 10.00th=[12780], 20.00th=[13173], 00:10:29.289 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14353], 00:10:29.289 | 70.00th=[14615], 80.00th=[15401], 90.00th=[40109], 95.00th=[50070], 00:10:29.289 | 99.00th=[62129], 99.50th=[63701], 99.90th=[68682], 99.95th=[68682], 00:10:29.289 | 99.99th=[68682] 00:10:29.289 write: IOPS=3923, BW=15.3MiB/s 
(16.1MB/s)(16.0MiB/1044msec); 0 zone resets 00:10:29.289 slat (usec): min=4, max=13410, avg=112.56, stdev=580.82 00:10:29.289 clat (usec): min=9385, max=40972, avg=14540.74, stdev=3308.50 00:10:29.289 lat (usec): min=9397, max=41321, avg=14653.31, stdev=3356.96 00:10:29.289 clat percentiles (usec): 00:10:29.289 | 1.00th=[10028], 5.00th=[11207], 10.00th=[12780], 20.00th=[13173], 00:10:29.289 | 30.00th=[13566], 40.00th=[13829], 50.00th=[13960], 60.00th=[14222], 00:10:29.289 | 70.00th=[14353], 80.00th=[14615], 90.00th=[16450], 95.00th=[18482], 00:10:29.289 | 99.00th=[33817], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:10:29.289 | 99.99th=[41157] 00:10:29.289 bw ( KiB/s): min=16384, max=16384, per=27.87%, avg=16384.00, stdev= 0.00, samples=2 00:10:29.289 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:10:29.289 lat (msec) : 10=0.46%, 20=91.54%, 50=5.35%, 100=2.66% 00:10:29.289 cpu : usr=5.37%, sys=9.68%, ctx=405, majf=0, minf=1 00:10:29.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:29.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.289 issued rwts: total=4032,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.289 00:10:29.289 Run status group 0 (all jobs): 00:10:29.289 READ: bw=53.0MiB/s (55.6MB/s), 8103KiB/s-21.1MiB/s (8297kB/s-22.1MB/s), io=55.4MiB (58.1MB), run=1002-1044msec 00:10:29.289 WRITE: bw=57.4MiB/s (60.2MB/s), 9.83MiB/s-22.0MiB/s (10.3MB/s-23.0MB/s), io=59.9MiB (62.8MB), run=1002-1044msec 00:10:29.289 00:10:29.289 Disk stats (read/write): 00:10:29.289 nvme0n1: ios=2006/2048, merge=0/0, ticks=20708/29655, in_queue=50363, util=98.20% 00:10:29.289 nvme0n2: ios=4613/4704, merge=0/0, ticks=17547/15258, in_queue=32805, util=83.76% 00:10:29.289 nvme0n3: ios=2094/2207, merge=0/0, ticks=39500/59257, 
in_queue=98757, util=99.46% 00:10:29.289 nvme0n4: ios=3382/3584, merge=0/0, ticks=16775/15844, in_queue=32619, util=98.36% 00:10:29.289 00:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:29.289 00:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=156257 00:10:29.289 00:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:29.289 00:15:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:29.289 [global] 00:10:29.289 thread=1 00:10:29.289 invalidate=1 00:10:29.289 rw=read 00:10:29.289 time_based=1 00:10:29.289 runtime=10 00:10:29.289 ioengine=libaio 00:10:29.289 direct=1 00:10:29.289 bs=4096 00:10:29.289 iodepth=1 00:10:29.289 norandommap=1 00:10:29.289 numjobs=1 00:10:29.289 00:10:29.289 [job0] 00:10:29.289 filename=/dev/nvme0n1 00:10:29.289 [job1] 00:10:29.289 filename=/dev/nvme0n2 00:10:29.289 [job2] 00:10:29.289 filename=/dev/nvme0n3 00:10:29.289 [job3] 00:10:29.289 filename=/dev/nvme0n4 00:10:29.289 Could not set queue depth (nvme0n1) 00:10:29.289 Could not set queue depth (nvme0n2) 00:10:29.289 Could not set queue depth (nvme0n3) 00:10:29.289 Could not set queue depth (nvme0n4) 00:10:29.549 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.549 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.549 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.549 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.549 fio-3.35 00:10:29.549 Starting 4 threads 00:10:32.095 00:15:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_raid_delete concat0 00:10:32.674 00:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:32.674 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=40579072, buflen=4096 00:10:32.674 fio: pid=156353, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:32.934 00:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:32.934 00:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:32.934 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4493312, buflen=4096 00:10:32.934 fio: pid=156352, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:33.192 00:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:33.192 00:15:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:33.192 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=3948544, buflen=4096 00:10:33.192 fio: pid=156349, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:33.452 00:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:33.452 00:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:33.452 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=30715904, buflen=4096 00:10:33.452 fio: 
pid=156350, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:33.452 00:10:33.452 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156349: Mon Nov 18 00:15:57 2024 00:10:33.452 read: IOPS=274, BW=1097KiB/s (1123kB/s)(3856KiB/3515msec) 00:10:33.452 slat (usec): min=4, max=9897, avg=21.42, stdev=318.32 00:10:33.452 clat (usec): min=182, max=42149, avg=3597.26, stdev=11236.06 00:10:33.452 lat (usec): min=188, max=50928, avg=3618.66, stdev=11276.06 00:10:33.452 clat percentiles (usec): 00:10:33.452 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 215], 00:10:33.452 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 249], 00:10:33.452 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 334], 95.00th=[41157], 00:10:33.452 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:33.452 | 99.99th=[42206] 00:10:33.452 bw ( KiB/s): min= 96, max= 5280, per=6.17%, avg=1257.33, stdev=2032.42, samples=6 00:10:33.452 iops : min= 24, max= 1320, avg=314.33, stdev=508.11, samples=6 00:10:33.452 lat (usec) : 250=61.66%, 500=29.84%, 750=0.21% 00:10:33.452 lat (msec) : 50=8.19% 00:10:33.452 cpu : usr=0.14%, sys=0.34%, ctx=967, majf=0, minf=1 00:10:33.452 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.452 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.452 issued rwts: total=965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.452 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.452 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156350: Mon Nov 18 00:15:57 2024 00:10:33.452 read: IOPS=1962, BW=7848KiB/s (8037kB/s)(29.3MiB/3822msec) 00:10:33.452 slat (usec): min=4, max=2914, avg=16.40, stdev=34.72 00:10:33.452 clat (usec): min=171, max=64297, 
avg=486.26, stdev=2975.08 00:10:33.452 lat (usec): min=177, max=64346, avg=502.66, stdev=2980.55 00:10:33.452 clat percentiles (usec): 00:10:33.452 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 215], 00:10:33.452 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 269], 60.00th=[ 285], 00:10:33.452 | 70.00th=[ 310], 80.00th=[ 334], 90.00th=[ 363], 95.00th=[ 383], 00:10:33.452 | 99.00th=[ 498], 99.50th=[40109], 99.90th=[41157], 99.95th=[41157], 00:10:33.452 | 99.99th=[64226] 00:10:33.452 bw ( KiB/s): min= 93, max=13944, per=42.02%, avg=8561.86, stdev=5874.72, samples=7 00:10:33.452 iops : min= 23, max= 3486, avg=2140.43, stdev=1468.74, samples=7 00:10:33.452 lat (usec) : 250=44.43%, 500=54.57%, 750=0.47% 00:10:33.452 lat (msec) : 2=0.01%, 50=0.48%, 100=0.03% 00:10:33.452 cpu : usr=1.57%, sys=3.87%, ctx=7503, majf=0, minf=2 00:10:33.452 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.452 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.452 issued rwts: total=7500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.452 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.452 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156352: Mon Nov 18 00:15:57 2024 00:10:33.452 read: IOPS=338, BW=1354KiB/s (1387kB/s)(4388KiB/3240msec) 00:10:33.452 slat (nsec): min=5587, max=70786, avg=13313.84, stdev=6214.11 00:10:33.452 clat (usec): min=213, max=42190, avg=2915.52, stdev=10063.33 00:10:33.452 lat (usec): min=221, max=42206, avg=2928.83, stdev=10064.07 00:10:33.452 clat percentiles (usec): 00:10:33.452 | 1.00th=[ 221], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 249], 00:10:33.452 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:10:33.452 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 367], 95.00th=[41157], 00:10:33.452 | 99.00th=[41681], 
99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:33.452 | 99.99th=[42206] 00:10:33.452 bw ( KiB/s): min= 96, max= 6248, per=7.14%, avg=1454.67, stdev=2392.96, samples=6 00:10:33.452 iops : min= 24, max= 1562, avg=363.67, stdev=598.24, samples=6 00:10:33.452 lat (usec) : 250=20.40%, 500=72.40%, 750=0.64% 00:10:33.452 lat (msec) : 50=6.47% 00:10:33.452 cpu : usr=0.34%, sys=0.62%, ctx=1098, majf=0, minf=1 00:10:33.452 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.452 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.452 issued rwts: total=1098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.452 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.452 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=156353: Mon Nov 18 00:15:57 2024 00:10:33.452 read: IOPS=3378, BW=13.2MiB/s (13.8MB/s)(38.7MiB/2933msec) 00:10:33.452 slat (nsec): min=5154, max=64988, avg=12673.16, stdev=6477.77 00:10:33.452 clat (usec): min=182, max=40596, avg=277.86, stdev=408.43 00:10:33.452 lat (usec): min=188, max=40605, avg=290.53, stdev=408.82 00:10:33.452 clat percentiles (usec): 00:10:33.452 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 223], 00:10:33.452 | 30.00th=[ 241], 40.00th=[ 260], 50.00th=[ 277], 60.00th=[ 293], 00:10:33.452 | 70.00th=[ 302], 80.00th=[ 314], 90.00th=[ 326], 95.00th=[ 343], 00:10:33.452 | 99.00th=[ 457], 99.50th=[ 498], 99.90th=[ 562], 99.95th=[ 603], 00:10:33.452 | 99.99th=[40633] 00:10:33.452 bw ( KiB/s): min=12144, max=15352, per=68.32%, avg=13920.00, stdev=1542.02, samples=5 00:10:33.452 iops : min= 3036, max= 3838, avg=3480.00, stdev=385.50, samples=5 00:10:33.452 lat (usec) : 250=34.61%, 500=64.92%, 750=0.45% 00:10:33.452 lat (msec) : 50=0.01% 00:10:33.452 cpu : usr=2.86%, sys=6.24%, ctx=9908, majf=0, minf=2 00:10:33.452 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.452 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.452 issued rwts: total=9908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.452 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.452 00:10:33.452 Run status group 0 (all jobs): 00:10:33.452 READ: bw=19.9MiB/s (20.9MB/s), 1097KiB/s-13.2MiB/s (1123kB/s-13.8MB/s), io=76.0MiB (79.7MB), run=2933-3822msec 00:10:33.452 00:10:33.452 Disk stats (read/write): 00:10:33.452 nvme0n1: ios=960/0, merge=0/0, ticks=3298/0, in_queue=3298, util=95.74% 00:10:33.452 nvme0n2: ios=7494/0, merge=0/0, ticks=3405/0, in_queue=3405, util=96.52% 00:10:33.452 nvme0n3: ios=1094/0, merge=0/0, ticks=3054/0, in_queue=3054, util=96.79% 00:10:33.452 nvme0n4: ios=9731/0, merge=0/0, ticks=2542/0, in_queue=2542, util=96.75% 00:10:33.711 00:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:33.711 00:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:33.970 00:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:33.970 00:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:34.229 00:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.229 00:15:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 
00:10:34.492 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.492 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:34.754 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:34.754 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 156257 00:10:34.754 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:34.754 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:35.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.014 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:35.014 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:35.014 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:35.015 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.015 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:35.015 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.015 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:35.015 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:35.015 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:35.015 nvmf 
hotplug test: fio failed as expected 00:10:35.015 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.276 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:35.276 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:35.276 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:35.276 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:35.276 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:35.276 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:35.276 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:35.276 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.276 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:35.276 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.276 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.276 rmmod nvme_tcp 00:10:35.276 rmmod nvme_fabrics 00:10:35.277 rmmod nvme_keyring 00:10:35.277 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:35.277 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:35.277 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:35.277 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 154220 ']' 00:10:35.277 00:15:58 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 154220 00:10:35.277 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 154220 ']' 00:10:35.277 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 154220 00:10:35.277 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:35.277 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.277 00:15:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 154220 00:10:35.277 00:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:35.277 00:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:35.277 00:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 154220' 00:10:35.277 killing process with pid 154220 00:10:35.277 00:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 154220 00:10:35.277 00:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 154220 00:10:35.541 00:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:35.541 00:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:35.541 00:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:35.541 00:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:35.541 00:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:35.541 00:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:10:35.541 00:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:35.542 00:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.542 00:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:35.542 00:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.542 00:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.542 00:15:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.458 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:37.458 00:10:37.458 real 0m24.292s 00:10:37.458 user 1m24.962s 00:10:37.458 sys 0m7.510s 00:10:37.458 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.458 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.458 ************************************ 00:10:37.458 END TEST nvmf_fio_target 00:10:37.458 ************************************ 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.717 ************************************ 00:10:37.717 START TEST nvmf_bdevio 00:10:37.717 ************************************ 00:10:37.717 00:16:01 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:37.717 * Looking for test storage... 00:10:37.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@344 -- # case "$op" in 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.717 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:10:37.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.718 --rc genhtml_branch_coverage=1 00:10:37.718 --rc genhtml_function_coverage=1 00:10:37.718 --rc genhtml_legend=1 00:10:37.718 --rc geninfo_all_blocks=1 00:10:37.718 --rc geninfo_unexecuted_blocks=1 00:10:37.718 00:10:37.718 ' 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.718 --rc genhtml_branch_coverage=1 00:10:37.718 --rc genhtml_function_coverage=1 00:10:37.718 --rc genhtml_legend=1 00:10:37.718 --rc geninfo_all_blocks=1 00:10:37.718 --rc geninfo_unexecuted_blocks=1 00:10:37.718 00:10:37.718 ' 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.718 --rc genhtml_branch_coverage=1 00:10:37.718 --rc genhtml_function_coverage=1 00:10:37.718 --rc genhtml_legend=1 00:10:37.718 --rc geninfo_all_blocks=1 00:10:37.718 --rc geninfo_unexecuted_blocks=1 00:10:37.718 00:10:37.718 ' 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.718 --rc genhtml_branch_coverage=1 00:10:37.718 --rc genhtml_function_coverage=1 00:10:37.718 --rc genhtml_legend=1 00:10:37.718 --rc geninfo_all_blocks=1 00:10:37.718 --rc geninfo_unexecuted_blocks=1 00:10:37.718 00:10:37.718 ' 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.718 00:16:01 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # 
shopt -s extglob 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.718 00:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.255 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.256 00:16:03 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:40.256 00:16:03 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:40.256 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:40.256 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:40.256 
00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:40.256 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:40.256 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:40.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:40.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:10:40.256 00:10:40.256 --- 10.0.0.2 ping statistics --- 00:10:40.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.256 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:10:40.256 00:10:40.256 --- 10.0.0.1 ping statistics --- 00:10:40.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.256 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:40.256 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:40.257 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.257 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:40.257 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:40.257 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.257 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:40.257 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:40.257 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:40.257 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:40.257 00:16:03 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:40.257 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.257 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=158995 00:10:40.257 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:40.257 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 158995 00:10:40.257 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 158995 ']' 00:10:40.257 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.257 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.257 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.257 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.257 00:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.257 [2024-11-18 00:16:03.967510] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:10:40.257 [2024-11-18 00:16:03.967589] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.257 [2024-11-18 00:16:04.040612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.514 [2024-11-18 00:16:04.091974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.514 [2024-11-18 00:16:04.092023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.514 [2024-11-18 00:16:04.092052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.514 [2024-11-18 00:16:04.092063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.514 [2024-11-18 00:16:04.092072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:40.514 [2024-11-18 00:16:04.093825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:40.514 [2024-11-18 00:16:04.093885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:40.514 [2024-11-18 00:16:04.093957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:40.514 [2024-11-18 00:16:04.093960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.514 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.514 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:40.514 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:40.514 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.515 [2024-11-18 00:16:04.239199] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.515 00:16:04 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.515 Malloc0 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.515 [2024-11-18 00:16:04.305268] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:40.515 { 00:10:40.515 "params": { 00:10:40.515 "name": "Nvme$subsystem", 00:10:40.515 "trtype": "$TEST_TRANSPORT", 00:10:40.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:40.515 "adrfam": "ipv4", 00:10:40.515 "trsvcid": "$NVMF_PORT", 00:10:40.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:40.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:40.515 "hdgst": ${hdgst:-false}, 00:10:40.515 "ddgst": ${ddgst:-false} 00:10:40.515 }, 00:10:40.515 "method": "bdev_nvme_attach_controller" 00:10:40.515 } 00:10:40.515 EOF 00:10:40.515 )") 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:40.515 00:16:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:40.515 "params": { 00:10:40.515 "name": "Nvme1", 00:10:40.515 "trtype": "tcp", 00:10:40.515 "traddr": "10.0.0.2", 00:10:40.515 "adrfam": "ipv4", 00:10:40.515 "trsvcid": "4420", 00:10:40.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:40.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:40.515 "hdgst": false, 00:10:40.515 "ddgst": false 00:10:40.515 }, 00:10:40.515 "method": "bdev_nvme_attach_controller" 00:10:40.515 }' 00:10:40.773 [2024-11-18 00:16:04.353999] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:10:40.773 [2024-11-18 00:16:04.354064] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159137 ] 00:10:40.773 [2024-11-18 00:16:04.423390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:40.773 [2024-11-18 00:16:04.475053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.773 [2024-11-18 00:16:04.475106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.773 [2024-11-18 00:16:04.475109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.030 I/O targets: 00:10:41.030 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:41.030 00:10:41.030 00:10:41.030 CUnit - A unit testing framework for C - Version 2.1-3 00:10:41.030 http://cunit.sourceforge.net/ 00:10:41.030 00:10:41.030 00:10:41.030 Suite: bdevio tests on: Nvme1n1 00:10:41.287 Test: blockdev write read block ...passed 00:10:41.287 Test: blockdev write zeroes read block ...passed 00:10:41.287 Test: blockdev write zeroes read no split ...passed 00:10:41.287 Test: blockdev write zeroes read split 
...passed 00:10:41.287 Test: blockdev write zeroes read split partial ...passed 00:10:41.287 Test: blockdev reset ...[2024-11-18 00:16:04.930367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:41.287 [2024-11-18 00:16:04.930473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x54eb70 (9): Bad file descriptor 00:10:41.287 [2024-11-18 00:16:04.947232] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:41.287 passed 00:10:41.287 Test: blockdev write read 8 blocks ...passed 00:10:41.287 Test: blockdev write read size > 128k ...passed 00:10:41.287 Test: blockdev write read invalid size ...passed 00:10:41.287 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:41.287 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:41.287 Test: blockdev write read max offset ...passed 00:10:41.545 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:41.545 Test: blockdev writev readv 8 blocks ...passed 00:10:41.545 Test: blockdev writev readv 30 x 1block ...passed 00:10:41.545 Test: blockdev writev readv block ...passed 00:10:41.545 Test: blockdev writev readv size > 128k ...passed 00:10:41.545 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:41.545 Test: blockdev comparev and writev ...[2024-11-18 00:16:05.160525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.545 [2024-11-18 00:16:05.160563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:41.545 [2024-11-18 00:16:05.160601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.545 [2024-11-18 
00:16:05.160630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:41.545 [2024-11-18 00:16:05.161037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.545 [2024-11-18 00:16:05.161064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:41.545 [2024-11-18 00:16:05.161098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.545 [2024-11-18 00:16:05.161125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:41.545 [2024-11-18 00:16:05.161520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.545 [2024-11-18 00:16:05.161547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:41.545 [2024-11-18 00:16:05.161581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.545 [2024-11-18 00:16:05.161607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:41.545 [2024-11-18 00:16:05.161988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.545 [2024-11-18 00:16:05.162014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:41.545 [2024-11-18 00:16:05.162050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:41.545 [2024-11-18 00:16:05.162076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:41.545 passed 00:10:41.545 Test: blockdev nvme passthru rw ...passed 00:10:41.545 Test: blockdev nvme passthru vendor specific ...[2024-11-18 00:16:05.244599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:41.545 [2024-11-18 00:16:05.244627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:41.545 [2024-11-18 00:16:05.244804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:41.545 [2024-11-18 00:16:05.244833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:41.545 [2024-11-18 00:16:05.244990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:41.545 [2024-11-18 00:16:05.245015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:41.545 [2024-11-18 00:16:05.245171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:41.545 [2024-11-18 00:16:05.245196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:41.545 passed 00:10:41.545 Test: blockdev nvme admin passthru ...passed 00:10:41.545 Test: blockdev copy ...passed 00:10:41.545 00:10:41.545 Run Summary: Type Total Ran Passed Failed Inactive 00:10:41.545 suites 1 1 n/a 0 0 00:10:41.545 tests 23 23 23 0 0 00:10:41.545 asserts 152 152 152 0 n/a 00:10:41.545 00:10:41.545 Elapsed time = 0.965 seconds 
00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.802 rmmod nvme_tcp 00:10:41.802 rmmod nvme_fabrics 00:10:41.802 rmmod nvme_keyring 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 158995 ']' 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 158995 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 158995 ']' 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 158995 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 158995 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 158995' 00:10:41.802 killing process with pid 158995 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 158995 00:10:41.802 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 158995 00:10:42.062 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:42.062 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:42.062 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:42.062 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:42.062 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:42.062 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:42.062 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:42.062 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:10:42.062 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:42.062 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.062 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.062 00:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.606 00:16:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:44.606 00:10:44.606 real 0m6.509s 00:10:44.606 user 0m9.977s 00:10:44.606 sys 0m2.233s 00:10:44.606 00:16:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.606 00:16:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:44.606 ************************************ 00:10:44.606 END TEST nvmf_bdevio 00:10:44.606 ************************************ 00:10:44.606 00:16:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:44.606 00:10:44.606 real 3m56.899s 00:10:44.606 user 10m14.894s 00:10:44.606 sys 1m9.049s 00:10:44.606 00:16:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.606 00:16:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:44.606 ************************************ 00:10:44.606 END TEST nvmf_target_core 00:10:44.606 ************************************ 00:10:44.606 00:16:07 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:44.606 00:16:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.606 00:16:07 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.606 00:16:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:10:44.606 ************************************ 00:10:44.606 START TEST nvmf_target_extra 00:10:44.606 ************************************ 00:10:44.606 00:16:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:44.606 * Looking for test storage... 00:10:44.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:44.606 00:16:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:44.606 00:16:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:44.606 00:16:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.606 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:44.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.607 --rc genhtml_branch_coverage=1 00:10:44.607 --rc genhtml_function_coverage=1 00:10:44.607 --rc genhtml_legend=1 00:10:44.607 --rc geninfo_all_blocks=1 
00:10:44.607 --rc geninfo_unexecuted_blocks=1 00:10:44.607 00:10:44.607 ' 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:44.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.607 --rc genhtml_branch_coverage=1 00:10:44.607 --rc genhtml_function_coverage=1 00:10:44.607 --rc genhtml_legend=1 00:10:44.607 --rc geninfo_all_blocks=1 00:10:44.607 --rc geninfo_unexecuted_blocks=1 00:10:44.607 00:10:44.607 ' 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:44.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.607 --rc genhtml_branch_coverage=1 00:10:44.607 --rc genhtml_function_coverage=1 00:10:44.607 --rc genhtml_legend=1 00:10:44.607 --rc geninfo_all_blocks=1 00:10:44.607 --rc geninfo_unexecuted_blocks=1 00:10:44.607 00:10:44.607 ' 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:44.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.607 --rc genhtml_branch_coverage=1 00:10:44.607 --rc genhtml_function_coverage=1 00:10:44.607 --rc genhtml_legend=1 00:10:44.607 --rc geninfo_all_blocks=1 00:10:44.607 --rc geninfo_unexecuted_blocks=1 00:10:44.607 00:10:44.607 ' 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:44.607 ************************************ 00:10:44.607 START TEST nvmf_example 00:10:44.607 ************************************ 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:44.607 * Looking for test storage... 00:10:44.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.607 
00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:44.607 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:44.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.608 --rc genhtml_branch_coverage=1 00:10:44.608 --rc genhtml_function_coverage=1 00:10:44.608 --rc genhtml_legend=1 00:10:44.608 --rc geninfo_all_blocks=1 00:10:44.608 --rc geninfo_unexecuted_blocks=1 00:10:44.608 00:10:44.608 ' 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:44.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.608 --rc genhtml_branch_coverage=1 00:10:44.608 --rc genhtml_function_coverage=1 00:10:44.608 --rc genhtml_legend=1 00:10:44.608 --rc geninfo_all_blocks=1 00:10:44.608 --rc geninfo_unexecuted_blocks=1 00:10:44.608 00:10:44.608 ' 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:44.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.608 --rc genhtml_branch_coverage=1 00:10:44.608 --rc genhtml_function_coverage=1 00:10:44.608 --rc genhtml_legend=1 00:10:44.608 --rc geninfo_all_blocks=1 00:10:44.608 --rc geninfo_unexecuted_blocks=1 00:10:44.608 00:10:44.608 ' 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:44.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.608 --rc 
genhtml_branch_coverage=1 00:10:44.608 --rc genhtml_function_coverage=1 00:10:44.608 --rc genhtml_legend=1 00:10:44.608 --rc geninfo_all_blocks=1 00:10:44.608 --rc geninfo_unexecuted_blocks=1 00:10:44.608 00:10:44.608 ' 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:44.608 00:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.608 
00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.608 00:16:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:47.152 00:16:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:47.152 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:47.152 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:47.152 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:47.152 00:16:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:47.152 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:47.152 
00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:47.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:47.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:10:47.152 00:10:47.152 --- 10.0.0.2 ping statistics --- 00:10:47.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.152 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:47.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:10:47.152 00:10:47.152 --- 10.0.0.1 ping statistics --- 00:10:47.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.152 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:47.152 00:16:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:47.152 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=161285 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 161285 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 161285 ']' 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:10:47.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:47.153 00:16:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:47.153 00:16:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:59.400 Initializing NVMe Controllers 00:10:59.400 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:59.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:59.401 Initialization complete. Launching workers. 00:10:59.401 ======================================================== 00:10:59.401 Latency(us) 00:10:59.401 Device Information : IOPS MiB/s Average min max 00:10:59.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14830.60 57.93 4317.19 650.88 20028.66 00:10:59.401 ======================================================== 00:10:59.401 Total : 14830.60 57.93 4317.19 650.88 20028.66 00:10:59.401 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:59.401 rmmod nvme_tcp 00:10:59.401 rmmod nvme_fabrics 00:10:59.401 rmmod nvme_keyring 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 161285 ']' 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 161285 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 161285 ']' 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 161285 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 161285 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 161285' 00:10:59.401 killing process with pid 161285 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 161285 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 161285 00:10:59.401 nvmf threads initialize successfully 00:10:59.401 bdev subsystem init successfully 00:10:59.401 created a nvmf target service 00:10:59.401 create targets's poll groups done 00:10:59.401 all subsystems of target started 00:10:59.401 nvmf target is running 00:10:59.401 all subsystems of target stopped 00:10:59.401 destroy targets's poll groups done 00:10:59.401 destroyed the nvmf target service 00:10:59.401 bdev subsystem finish 
successfully 00:10:59.401 nvmf threads destroy successfully 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.401 00:16:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.972 00:10:59.972 real 0m15.412s 00:10:59.972 user 0m42.162s 00:10:59.972 sys 0m3.473s 00:10:59.972 00:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.972 ************************************ 00:10:59.972 END TEST nvmf_example 00:10:59.972 ************************************ 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:59.972 ************************************ 00:10:59.972 START TEST nvmf_filesystem 00:10:59.972 ************************************ 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:59.972 * Looking for test storage... 
00:10:59.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:59.972 
00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:59.972 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:59.972 --rc genhtml_branch_coverage=1 00:10:59.972 --rc genhtml_function_coverage=1 00:10:59.972 --rc genhtml_legend=1 00:10:59.972 --rc geninfo_all_blocks=1 00:10:59.972 --rc geninfo_unexecuted_blocks=1 00:10:59.972 00:10:59.972 ' 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:59.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.972 --rc genhtml_branch_coverage=1 00:10:59.972 --rc genhtml_function_coverage=1 00:10:59.972 --rc genhtml_legend=1 00:10:59.972 --rc geninfo_all_blocks=1 00:10:59.972 --rc geninfo_unexecuted_blocks=1 00:10:59.972 00:10:59.972 ' 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:59.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.972 --rc genhtml_branch_coverage=1 00:10:59.972 --rc genhtml_function_coverage=1 00:10:59.972 --rc genhtml_legend=1 00:10:59.972 --rc geninfo_all_blocks=1 00:10:59.972 --rc geninfo_unexecuted_blocks=1 00:10:59.972 00:10:59.972 ' 00:10:59.972 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:59.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.972 --rc genhtml_branch_coverage=1 00:10:59.972 --rc genhtml_function_coverage=1 00:10:59.973 --rc genhtml_legend=1 00:10:59.973 --rc geninfo_all_blocks=1 00:10:59.973 --rc geninfo_unexecuted_blocks=1 00:10:59.973 00:10:59.973 ' 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:59.973 00:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:59.973 00:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:59.973 00:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 
-- # CONFIG_ARCH=native 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:59.973 
00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:59.973 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:59.974 00:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:59.974 #define SPDK_CONFIG_H 00:10:59.974 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:59.974 #define SPDK_CONFIG_APPS 1 00:10:59.974 #define SPDK_CONFIG_ARCH native 00:10:59.974 #undef SPDK_CONFIG_ASAN 00:10:59.974 #undef SPDK_CONFIG_AVAHI 00:10:59.974 #undef SPDK_CONFIG_CET 00:10:59.974 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:59.974 #define SPDK_CONFIG_COVERAGE 1 00:10:59.974 #define SPDK_CONFIG_CROSS_PREFIX 00:10:59.974 #undef SPDK_CONFIG_CRYPTO 00:10:59.974 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:59.974 #undef SPDK_CONFIG_CUSTOMOCF 00:10:59.974 #undef SPDK_CONFIG_DAOS 00:10:59.974 #define SPDK_CONFIG_DAOS_DIR 00:10:59.974 #define SPDK_CONFIG_DEBUG 1 00:10:59.974 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:59.974 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:59.974 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:59.974 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:59.974 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:59.974 #undef SPDK_CONFIG_DPDK_UADK 00:10:59.974 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:59.974 #define SPDK_CONFIG_EXAMPLES 1 00:10:59.974 #undef SPDK_CONFIG_FC 00:10:59.974 #define SPDK_CONFIG_FC_PATH 00:10:59.974 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:59.974 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:59.974 #define SPDK_CONFIG_FSDEV 1 00:10:59.974 #undef SPDK_CONFIG_FUSE 00:10:59.974 #undef SPDK_CONFIG_FUZZER 00:10:59.974 #define 
SPDK_CONFIG_FUZZER_LIB 00:10:59.974 #undef SPDK_CONFIG_GOLANG 00:10:59.974 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:59.974 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:59.974 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:59.974 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:59.974 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:59.974 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:59.974 #undef SPDK_CONFIG_HAVE_LZ4 00:10:59.974 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:59.974 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:59.974 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:59.974 #define SPDK_CONFIG_IDXD 1 00:10:59.974 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:59.974 #undef SPDK_CONFIG_IPSEC_MB 00:10:59.974 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:59.974 #define SPDK_CONFIG_ISAL 1 00:10:59.974 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:59.974 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:59.974 #define SPDK_CONFIG_LIBDIR 00:10:59.974 #undef SPDK_CONFIG_LTO 00:10:59.974 #define SPDK_CONFIG_MAX_LCORES 128 00:10:59.974 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:59.974 #define SPDK_CONFIG_NVME_CUSE 1 00:10:59.974 #undef SPDK_CONFIG_OCF 00:10:59.974 #define SPDK_CONFIG_OCF_PATH 00:10:59.974 #define SPDK_CONFIG_OPENSSL_PATH 00:10:59.974 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:59.974 #define SPDK_CONFIG_PGO_DIR 00:10:59.974 #undef SPDK_CONFIG_PGO_USE 00:10:59.974 #define SPDK_CONFIG_PREFIX /usr/local 00:10:59.974 #undef SPDK_CONFIG_RAID5F 00:10:59.974 #undef SPDK_CONFIG_RBD 00:10:59.974 #define SPDK_CONFIG_RDMA 1 00:10:59.974 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:59.974 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:59.974 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:59.974 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:59.974 #define SPDK_CONFIG_SHARED 1 00:10:59.974 #undef SPDK_CONFIG_SMA 00:10:59.974 #define SPDK_CONFIG_TESTS 1 00:10:59.974 #undef SPDK_CONFIG_TSAN 00:10:59.974 #define SPDK_CONFIG_UBLK 1 00:10:59.974 #define SPDK_CONFIG_UBSAN 1 00:10:59.974 #undef 
SPDK_CONFIG_UNIT_TESTS 00:10:59.974 #undef SPDK_CONFIG_URING 00:10:59.974 #define SPDK_CONFIG_URING_PATH 00:10:59.974 #undef SPDK_CONFIG_URING_ZNS 00:10:59.974 #undef SPDK_CONFIG_USDT 00:10:59.974 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:59.974 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:59.974 #define SPDK_CONFIG_VFIO_USER 1 00:10:59.974 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:59.974 #define SPDK_CONFIG_VHOST 1 00:10:59.974 #define SPDK_CONFIG_VIRTIO 1 00:10:59.974 #undef SPDK_CONFIG_VTUNE 00:10:59.974 #define SPDK_CONFIG_VTUNE_DIR 00:10:59.974 #define SPDK_CONFIG_WERROR 1 00:10:59.974 #define SPDK_CONFIG_WPDK_DIR 00:10:59.974 #undef SPDK_CONFIG_XNVME 00:10:59.974 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.974 00:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:59.974 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:59.975 00:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:59.975 
00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:59.975 00:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:59.975 
00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : v22.11.4 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:59.975 00:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:59.975 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:59.976 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:00.238 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:00.238 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:00.238 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:00.238 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:00.238 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:00.238 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:00.238 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:00.238 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:00.238 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:00.238 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:00.238 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:00.238 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:00.238 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:00.238 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:00.238 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:00.238 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:00.239 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 162939 ]] 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 162939 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.5eMiq6 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.5eMiq6/tests/target /tmp/spdk.5eMiq6 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:00.240 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=54526476288 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988511744 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7462035456 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:00.241 
00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30984224768 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994255872 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375273472 00:11:00.241 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397703168 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22429696 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993944576 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994255872 00:11:00.242 00:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=311296 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:00.242 * Looking for test storage... 
00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:00.242 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=54526476288 00:11:00.243 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:00.243 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:00.243 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:00.243 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:00.243 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:00.243 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9676627968 00:11:00.243 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:00.243 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.243 00:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.243 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.243 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:00.243 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:00.244 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:00.244 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:00.244 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:00.244 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:00.244 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:00.244 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:00.244 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:00.244 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:00.244 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:00.244 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:00.244 00:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:00.244 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:00.244 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:00.244 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:00.244 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:00.244 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:00.244 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:00.244 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.244 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.245 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.245 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.245 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.245 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.245 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.245 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.245 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.245 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.245 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:00.245 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:00.245 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:00.245 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.245 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:00.245 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:00.245 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:00.245 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.245 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:00.245 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.246 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:00.246 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:00.246 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.246 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:00.246 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.246 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.246 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.246 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:00.246 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.246 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:00.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.246 --rc genhtml_branch_coverage=1 00:11:00.246 --rc genhtml_function_coverage=1 00:11:00.246 --rc genhtml_legend=1 00:11:00.246 --rc geninfo_all_blocks=1 00:11:00.246 --rc geninfo_unexecuted_blocks=1 00:11:00.246 00:11:00.246 ' 00:11:00.246 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:00.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.246 --rc genhtml_branch_coverage=1 00:11:00.246 --rc genhtml_function_coverage=1 00:11:00.246 --rc genhtml_legend=1 00:11:00.246 --rc geninfo_all_blocks=1 00:11:00.246 --rc geninfo_unexecuted_blocks=1 00:11:00.246 00:11:00.246 ' 00:11:00.247 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:00.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.247 --rc genhtml_branch_coverage=1 00:11:00.247 --rc genhtml_function_coverage=1 00:11:00.247 --rc genhtml_legend=1 00:11:00.247 --rc geninfo_all_blocks=1 00:11:00.247 --rc geninfo_unexecuted_blocks=1 00:11:00.247 00:11:00.247 ' 00:11:00.247 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:00.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.247 --rc genhtml_branch_coverage=1 00:11:00.247 --rc genhtml_function_coverage=1 00:11:00.247 --rc genhtml_legend=1 00:11:00.247 --rc geninfo_all_blocks=1 00:11:00.247 --rc geninfo_unexecuted_blocks=1 00:11:00.247 00:11:00.247 ' 00:11:00.247 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.247 00:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:00.247 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.248 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.248 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.248 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.248 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.248 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.248 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.248 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.248 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.248 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.248 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:00.248 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:00.248 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.248 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.249 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.249 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.249 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.249 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.249 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.249 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.249 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.249 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.250 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.250 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.250 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:00.250 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.250 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:00.250 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.250 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.250 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.250 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.250 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.250 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.251 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.251 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.251 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.251 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:00.251 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:00.251 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:00.251 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.251 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.251 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.251 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.251 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.251 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.251 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.251 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.251 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:00.251 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:00.251 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:00.251 00:16:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.785 00:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:02.785 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:02.785 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.785 00:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:02.785 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:02.785 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:02.785 00:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.785 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:02.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:02.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:11:02.786 00:11:02.786 --- 10.0.0.2 ping statistics --- 00:11:02.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.786 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:02.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:11:02.786 00:11:02.786 --- 10.0.0.1 ping statistics --- 00:11:02.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.786 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:02.786 00:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.786 ************************************ 00:11:02.786 START TEST nvmf_filesystem_no_in_capsule 00:11:02.786 ************************************ 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=164613 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 164613 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 164613 ']' 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.786 [2024-11-18 00:16:26.341083] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:11:02.786 [2024-11-18 00:16:26.341181] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.786 [2024-11-18 00:16:26.414460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.786 [2024-11-18 00:16:26.465198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.786 [2024-11-18 00:16:26.465251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:02.786 [2024-11-18 00:16:26.465279] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.786 [2024-11-18 00:16:26.465290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.786 [2024-11-18 00:16:26.465300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:02.786 [2024-11-18 00:16:26.466841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.786 [2024-11-18 00:16:26.466900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.786 [2024-11-18 00:16:26.466965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.786 [2024-11-18 00:16:26.466968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:02.786 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.045 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.045 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:03.045 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:03.045 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.045 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.045 [2024-11-18 00:16:26.616817] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.045 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.045 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:03.045 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.046 Malloc1 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.046 [2024-11-18 00:16:26.798997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:03.046 00:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:03.046 { 00:11:03.046 "name": "Malloc1", 00:11:03.046 "aliases": [ 00:11:03.046 "36995377-4edf-40cd-b6d5-cfefdce23f1b" 00:11:03.046 ], 00:11:03.046 "product_name": "Malloc disk", 00:11:03.046 "block_size": 512, 00:11:03.046 "num_blocks": 1048576, 00:11:03.046 "uuid": "36995377-4edf-40cd-b6d5-cfefdce23f1b", 00:11:03.046 "assigned_rate_limits": { 00:11:03.046 "rw_ios_per_sec": 0, 00:11:03.046 "rw_mbytes_per_sec": 0, 00:11:03.046 "r_mbytes_per_sec": 0, 00:11:03.046 "w_mbytes_per_sec": 0 00:11:03.046 }, 00:11:03.046 "claimed": true, 00:11:03.046 "claim_type": "exclusive_write", 00:11:03.046 "zoned": false, 00:11:03.046 "supported_io_types": { 00:11:03.046 "read": true, 00:11:03.046 "write": true, 00:11:03.046 "unmap": true, 00:11:03.046 "flush": true, 00:11:03.046 "reset": true, 00:11:03.046 "nvme_admin": false, 00:11:03.046 "nvme_io": false, 00:11:03.046 "nvme_io_md": false, 00:11:03.046 "write_zeroes": true, 00:11:03.046 "zcopy": true, 00:11:03.046 "get_zone_info": false, 00:11:03.046 "zone_management": false, 00:11:03.046 "zone_append": false, 00:11:03.046 "compare": false, 00:11:03.046 "compare_and_write": 
false, 00:11:03.046 "abort": true, 00:11:03.046 "seek_hole": false, 00:11:03.046 "seek_data": false, 00:11:03.046 "copy": true, 00:11:03.046 "nvme_iov_md": false 00:11:03.046 }, 00:11:03.046 "memory_domains": [ 00:11:03.046 { 00:11:03.046 "dma_device_id": "system", 00:11:03.046 "dma_device_type": 1 00:11:03.046 }, 00:11:03.046 { 00:11:03.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.046 "dma_device_type": 2 00:11:03.046 } 00:11:03.046 ], 00:11:03.046 "driver_specific": {} 00:11:03.046 } 00:11:03.046 ]' 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:03.046 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:03.304 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:03.304 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:03.304 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:03.304 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:03.304 00:16:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:03.870 00:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:03.870 00:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:03.870 00:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:03.870 00:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:03.870 00:16:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:06.399 00:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:06.399 00:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:06.399 00:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:06.399 00:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:06.399 00:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:06.399 00:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:06.399 00:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:06.399 00:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:06.399 00:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:06.399 00:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:06.399 00:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:06.399 00:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:06.399 00:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:06.399 00:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:06.399 00:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:06.399 00:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:06.399 00:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:06.399 00:16:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:06.658 00:16:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:08.032 00:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:08.032 00:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:08.032 00:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:08.032 00:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.033 00:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.033 ************************************ 00:11:08.033 START TEST filesystem_ext4 00:11:08.033 ************************************ 00:11:08.033 00:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:08.033 00:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:08.033 00:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:08.033 00:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:08.033 00:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:08.033 00:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:08.033 00:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:08.033 00:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:08.033 00:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:08.033 00:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:08.033 00:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:08.033 mke2fs 1.47.0 (5-Feb-2023) 00:11:08.033 Discarding device blocks: 0/522240 done 00:11:08.033 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:08.033 Filesystem UUID: 1134ef5b-c8d2-4651-85a6-dae41c829494 00:11:08.033 Superblock backups stored on blocks: 00:11:08.033 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:08.033 00:11:08.033 Allocating group tables: 0/64 done 00:11:08.033 Writing inode tables: 0/64 done 00:11:08.033 Creating journal (8192 blocks): done 00:11:08.033 Writing superblocks and filesystem accounting information: 0/64 done 00:11:08.033 00:11:08.033 00:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:08.033 00:16:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:14.586 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:14.586 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:14.586 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:14.587 00:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 164613 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:14.587 00:11:14.587 real 0m6.273s 00:11:14.587 user 0m0.021s 00:11:14.587 sys 0m0.094s 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:14.587 ************************************ 00:11:14.587 END TEST filesystem_ext4 00:11:14.587 ************************************ 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:14.587 
00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.587 ************************************ 00:11:14.587 START TEST filesystem_btrfs 00:11:14.587 ************************************ 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:14.587 00:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:14.587 00:16:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:14.587 btrfs-progs v6.8.1 00:11:14.587 See https://btrfs.readthedocs.io for more information. 00:11:14.587 00:11:14.587 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:14.587 NOTE: several default settings have changed in version 5.15, please make sure 00:11:14.587 this does not affect your deployments: 00:11:14.587 - DUP for metadata (-m dup) 00:11:14.587 - enabled no-holes (-O no-holes) 00:11:14.587 - enabled free-space-tree (-R free-space-tree) 00:11:14.587 00:11:14.587 Label: (null) 00:11:14.587 UUID: 209361fc-5776-4722-9d82-5a199b4a4603 00:11:14.587 Node size: 16384 00:11:14.587 Sector size: 4096 (CPU page size: 4096) 00:11:14.587 Filesystem size: 510.00MiB 00:11:14.587 Block group profiles: 00:11:14.587 Data: single 8.00MiB 00:11:14.587 Metadata: DUP 32.00MiB 00:11:14.587 System: DUP 8.00MiB 00:11:14.587 SSD detected: yes 00:11:14.587 Zoned device: no 00:11:14.587 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:14.587 Checksum: crc32c 00:11:14.587 Number of devices: 1 00:11:14.587 Devices: 00:11:14.587 ID SIZE PATH 00:11:14.587 1 510.00MiB /dev/nvme0n1p1 00:11:14.587 00:11:14.587 00:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:14.587 00:16:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:15.524 00:16:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:15.524 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:15.524 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:15.524 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:15.524 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:15.524 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:15.524 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 164613 00:11:15.524 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:15.525 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:15.525 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:15.525 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:15.525 00:11:15.525 real 0m1.358s 00:11:15.525 user 0m0.014s 00:11:15.525 sys 0m0.140s 00:11:15.525 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.525 
00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:15.525 ************************************ 00:11:15.525 END TEST filesystem_btrfs 00:11:15.525 ************************************ 00:11:15.525 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:15.525 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:15.525 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.525 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.525 ************************************ 00:11:15.525 START TEST filesystem_xfs 00:11:15.525 ************************************ 00:11:15.525 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:15.525 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:15.525 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:15.525 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:15.525 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:15.525 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:15.525 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:15.525 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:15.525 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:15.526 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:15.526 00:16:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:15.526 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:15.526 = sectsz=512 attr=2, projid32bit=1 00:11:15.526 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:15.526 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:15.526 data = bsize=4096 blocks=130560, imaxpct=25 00:11:15.526 = sunit=0 swidth=0 blks 00:11:15.526 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:15.526 log =internal log bsize=4096 blocks=16384, version=2 00:11:15.526 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:15.526 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:16.483 Discarding blocks...Done. 
00:11:16.483 00:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:16.483 00:16:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 164613 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:19.767 00:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:19.767 00:11:19.767 real 0m3.940s 00:11:19.767 user 0m0.027s 00:11:19.767 sys 0m0.082s 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:19.767 ************************************ 00:11:19.767 END TEST filesystem_xfs 00:11:19.767 ************************************ 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:19.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 164613 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 164613 ']' 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 164613 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 164613 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 164613' 00:11:19.767 killing process with pid 164613 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 164613 00:11:19.767 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 164613 00:11:20.026 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:20.026 00:11:20.026 real 0m17.517s 00:11:20.026 user 1m7.957s 00:11:20.026 sys 0m2.274s 00:11:20.026 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.026 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.026 ************************************ 00:11:20.026 END TEST nvmf_filesystem_no_in_capsule 00:11:20.026 ************************************ 00:11:20.026 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:20.026 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:20.026 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.026 00:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:20.285 ************************************ 00:11:20.285 START TEST nvmf_filesystem_in_capsule 00:11:20.285 ************************************ 00:11:20.285 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:20.285 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:20.285 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:20.285 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:20.285 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:20.285 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.285 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=166854 00:11:20.285 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:20.285 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 166854 00:11:20.285 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 166854 ']' 00:11:20.285 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.285 00:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.285 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.285 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.285 00:16:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.285 [2024-11-18 00:16:43.909254] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:11:20.285 [2024-11-18 00:16:43.909346] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.285 [2024-11-18 00:16:43.980595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.285 [2024-11-18 00:16:44.026178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.285 [2024-11-18 00:16:44.026228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:20.285 [2024-11-18 00:16:44.026256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.285 [2024-11-18 00:16:44.026275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.285 [2024-11-18 00:16:44.026285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
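The startup sequence above (nvmfappstart launching nvmf_tgt as pid 166854, then "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") follows a poll-until-listening pattern. The function below is a hypothetical simplification of the waitforlisten helper visible in the trace, not the real implementation: it only polls for the socket path to appear, whereas the actual autotest_common.sh helper also exercises the SPDK RPC socket before declaring the target ready. The max_retries default mirrors the `local max_retries=100` line in the log.

```shell
# Hypothetical simplification of waitforlisten from the trace above.
# Assumption: a path-existence check stands in for the real helper's
# RPC probe against the UNIX domain socket.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i=0
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while [ "$i" -lt "$max_retries" ]; do
        [ -e "$rpc_addr" ] && return 0          # socket path appeared: target is listening
        kill -0 "$pid" 2>/dev/null || return 1  # target died before listening
        sleep 0.1
        i=$((i + 1))
    done
    return 1                                    # gave up waiting
}
```

In the run above this gate is what allows everything that follows: rpc_cmd calls such as nvmf_create_transport are only issued once the wait on pid 166854 has returned 0.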
00:11:20.285 [2024-11-18 00:16:44.027849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.285 [2024-11-18 00:16:44.027916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.285 [2024-11-18 00:16:44.027981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.285 [2024-11-18 00:16:44.027984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.544 [2024-11-18 00:16:44.169057] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.544 Malloc1 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.544 00:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.544 [2024-11-18 00:16:44.348027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:20.544 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:20.545 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:20.545 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.545 00:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.806 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.806 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:20.806 { 00:11:20.806 "name": "Malloc1", 00:11:20.806 "aliases": [ 00:11:20.806 "68cf729b-d2b0-4666-9472-a515dd630b7b" 00:11:20.806 ], 00:11:20.806 "product_name": "Malloc disk", 00:11:20.806 "block_size": 512, 00:11:20.806 "num_blocks": 1048576, 00:11:20.806 "uuid": "68cf729b-d2b0-4666-9472-a515dd630b7b", 00:11:20.806 "assigned_rate_limits": { 00:11:20.806 "rw_ios_per_sec": 0, 00:11:20.806 "rw_mbytes_per_sec": 0, 00:11:20.806 "r_mbytes_per_sec": 0, 00:11:20.806 "w_mbytes_per_sec": 0 00:11:20.806 }, 00:11:20.806 "claimed": true, 00:11:20.806 "claim_type": "exclusive_write", 00:11:20.806 "zoned": false, 00:11:20.806 "supported_io_types": { 00:11:20.806 "read": true, 00:11:20.806 "write": true, 00:11:20.806 "unmap": true, 00:11:20.806 "flush": true, 00:11:20.806 "reset": true, 00:11:20.806 "nvme_admin": false, 00:11:20.806 "nvme_io": false, 00:11:20.806 "nvme_io_md": false, 00:11:20.806 "write_zeroes": true, 00:11:20.806 "zcopy": true, 00:11:20.806 "get_zone_info": false, 00:11:20.806 "zone_management": false, 00:11:20.806 "zone_append": false, 00:11:20.806 "compare": false, 00:11:20.806 "compare_and_write": false, 00:11:20.806 "abort": true, 00:11:20.806 "seek_hole": false, 00:11:20.806 "seek_data": false, 00:11:20.806 "copy": true, 00:11:20.806 "nvme_iov_md": false 00:11:20.806 }, 00:11:20.806 "memory_domains": [ 00:11:20.806 { 00:11:20.806 "dma_device_id": "system", 00:11:20.806 "dma_device_type": 1 00:11:20.806 }, 00:11:20.806 { 00:11:20.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.806 "dma_device_type": 2 00:11:20.806 } 00:11:20.806 ], 00:11:20.806 
"driver_specific": {} 00:11:20.806 } 00:11:20.806 ]' 00:11:20.806 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:20.806 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:20.806 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:20.806 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:20.806 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:20.806 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:20.806 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:20.806 00:16:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:21.372 00:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:21.372 00:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:21.372 00:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:21.372 00:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:21.372 00:16:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:23.904 00:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:23.904 00:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:23.904 00:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:23.904 00:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:23.904 00:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:23.904 00:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:23.904 00:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:23.904 00:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:23.904 00:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:23.904 00:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:23.904 00:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:23.904 00:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:23.904 00:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:23.904 00:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:23.904 00:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:23.904 00:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:23.904 00:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:23.904 00:16:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:24.471 00:16:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:25.406 00:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:25.406 00:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:25.406 00:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:25.406 00:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.406 00:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.406 ************************************ 00:11:25.406 START TEST filesystem_in_capsule_ext4 00:11:25.406 ************************************ 00:11:25.406 00:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:25.406 00:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:25.407 00:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:25.407 00:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:25.407 00:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:25.407 00:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:25.407 00:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:25.407 00:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:25.407 00:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:25.407 00:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:25.407 00:16:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:25.407 mke2fs 1.47.0 (5-Feb-2023) 00:11:25.665 Discarding device blocks: 
0/522240 done 00:11:25.665 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:25.665 Filesystem UUID: 07fceab3-8258-48af-8d26-6af9ba5ed655 00:11:25.665 Superblock backups stored on blocks: 00:11:25.665 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:25.665 00:11:25.665 Allocating group tables: 0/64 done 00:11:25.665 Writing inode tables: 0/64 done 00:11:28.460 Creating journal (8192 blocks): done 00:11:30.659 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:11:30.659 00:11:30.660 00:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:30.660 00:16:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 166854 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:37.219 00:11:37.219 real 0m11.273s 00:11:37.219 user 0m0.029s 00:11:37.219 sys 0m0.059s 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:37.219 ************************************ 00:11:37.219 END TEST filesystem_in_capsule_ext4 00:11:37.219 ************************************ 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.219 ************************************ 00:11:37.219 START 
TEST filesystem_in_capsule_btrfs 00:11:37.219 ************************************ 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:37.219 btrfs-progs v6.8.1 00:11:37.219 See https://btrfs.readthedocs.io for more information. 00:11:37.219 00:11:37.219 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:37.219 NOTE: several default settings have changed in version 5.15, please make sure 00:11:37.219 this does not affect your deployments: 00:11:37.219 - DUP for metadata (-m dup) 00:11:37.219 - enabled no-holes (-O no-holes) 00:11:37.219 - enabled free-space-tree (-R free-space-tree) 00:11:37.219 00:11:37.219 Label: (null) 00:11:37.219 UUID: c345b0c7-be7c-48ad-9ef2-0cde08238a3f 00:11:37.219 Node size: 16384 00:11:37.219 Sector size: 4096 (CPU page size: 4096) 00:11:37.219 Filesystem size: 510.00MiB 00:11:37.219 Block group profiles: 00:11:37.219 Data: single 8.00MiB 00:11:37.219 Metadata: DUP 32.00MiB 00:11:37.219 System: DUP 8.00MiB 00:11:37.219 SSD detected: yes 00:11:37.219 Zoned device: no 00:11:37.219 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:37.219 Checksum: crc32c 00:11:37.219 Number of devices: 1 00:11:37.219 Devices: 00:11:37.219 ID SIZE PATH 00:11:37.219 1 510.00MiB /dev/nvme0n1p1 00:11:37.219 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:37.219 00:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:37.477 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:37.477 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:37.477 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:37.477 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:37.477 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:37.477 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:37.477 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 166854 00:11:37.477 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:37.478 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:37.478 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:37.478 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:37.738 00:11:37.738 real 0m0.828s 00:11:37.738 user 0m0.020s 00:11:37.738 sys 0m0.096s 00:11:37.738 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.738 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:37.738 ************************************ 00:11:37.738 END TEST filesystem_in_capsule_btrfs 00:11:37.738 ************************************ 00:11:37.738 00:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:37.738 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:37.738 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.738 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.738 ************************************ 00:11:37.738 START TEST filesystem_in_capsule_xfs 00:11:37.738 ************************************ 00:11:37.738 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:37.738 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:37.738 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:37.738 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:37.738 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:37.738 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:37.738 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:37.738 
00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:37.738 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:37.738 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:37.738 00:17:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:37.738 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:37.738 = sectsz=512 attr=2, projid32bit=1 00:11:37.738 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:37.738 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:37.738 data = bsize=4096 blocks=130560, imaxpct=25 00:11:37.738 = sunit=0 swidth=0 blks 00:11:37.738 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:37.738 log =internal log bsize=4096 blocks=16384, version=2 00:11:37.738 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:37.738 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:38.674 Discarding blocks...Done. 
00:11:38.674 00:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:38.674 00:17:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:41.210 00:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:41.210 00:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:41.210 00:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:41.210 00:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:41.210 00:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:41.210 00:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:41.210 00:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 166854 00:11:41.210 00:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:41.210 00:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:41.210 00:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:41.210 00:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:41.210 00:11:41.210 real 0m3.556s 00:11:41.210 user 0m0.013s 00:11:41.210 sys 0m0.063s 00:11:41.210 00:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.210 00:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:41.210 ************************************ 00:11:41.210 END TEST filesystem_in_capsule_xfs 00:11:41.210 ************************************ 00:11:41.210 00:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:41.210 00:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:41.210 00:17:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:41.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.468 00:17:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 166854 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 166854 ']' 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 166854 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.468 00:17:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 166854 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 166854' 00:11:41.468 killing process with pid 166854 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 166854 00:11:41.468 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 166854 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:42.037 00:11:42.037 real 0m21.741s 00:11:42.037 user 1m24.389s 00:11:42.037 sys 0m2.570s 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.037 ************************************ 00:11:42.037 END TEST nvmf_filesystem_in_capsule 00:11:42.037 ************************************ 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.037 rmmod nvme_tcp 00:11:42.037 rmmod nvme_fabrics 00:11:42.037 rmmod nvme_keyring 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.037 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.038 00:17:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.947 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:43.947 00:11:43.947 real 0m44.157s 00:11:43.947 user 2m33.503s 00:11:43.947 sys 0m6.620s 00:11:43.947 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.947 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.947 ************************************ 00:11:43.947 END TEST nvmf_filesystem 00:11:43.947 ************************************ 00:11:43.947 00:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:43.947 00:17:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:43.947 00:17:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.947 00:17:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:44.207 ************************************ 00:11:44.207 START TEST nvmf_target_discovery 00:11:44.207 ************************************ 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:44.207 * Looking for test storage... 
00:11:44.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:44.207 
00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:44.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.207 --rc genhtml_branch_coverage=1 00:11:44.207 --rc genhtml_function_coverage=1 00:11:44.207 --rc genhtml_legend=1 00:11:44.207 --rc geninfo_all_blocks=1 00:11:44.207 --rc geninfo_unexecuted_blocks=1 00:11:44.207 00:11:44.207 ' 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:44.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.207 --rc genhtml_branch_coverage=1 00:11:44.207 --rc genhtml_function_coverage=1 00:11:44.207 --rc genhtml_legend=1 00:11:44.207 --rc geninfo_all_blocks=1 00:11:44.207 --rc geninfo_unexecuted_blocks=1 00:11:44.207 00:11:44.207 ' 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:44.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.207 --rc genhtml_branch_coverage=1 00:11:44.207 --rc genhtml_function_coverage=1 00:11:44.207 --rc genhtml_legend=1 00:11:44.207 --rc geninfo_all_blocks=1 00:11:44.207 --rc geninfo_unexecuted_blocks=1 00:11:44.207 00:11:44.207 ' 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:44.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.207 --rc genhtml_branch_coverage=1 00:11:44.207 --rc genhtml_function_coverage=1 00:11:44.207 --rc genhtml_legend=1 00:11:44.207 --rc geninfo_all_blocks=1 00:11:44.207 --rc geninfo_unexecuted_blocks=1 00:11:44.207 00:11:44.207 ' 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.207 00:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.207 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:44.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:44.208 00:17:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.759 00:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:46.759 00:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:46.759 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:46.759 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.759 00:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:46.759 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:46.760 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:46.760 00:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:46.760 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:46.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:46.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:11:46.760 00:11:46.760 --- 10.0.0.2 ping statistics --- 00:11:46.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.760 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:46.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:46.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:11:46.760 00:11:46.760 --- 10.0.0.1 ping statistics --- 00:11:46.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.760 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=171553 00:11:46.760 00:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 171553 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 171553 ']' 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.760 [2024-11-18 00:17:10.317168] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:11:46.760 [2024-11-18 00:17:10.317257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.760 [2024-11-18 00:17:10.391916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:46.760 [2024-11-18 00:17:10.445112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:46.760 [2024-11-18 00:17:10.445167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.760 [2024-11-18 00:17:10.445196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.760 [2024-11-18 00:17:10.445207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.760 [2024-11-18 00:17:10.445216] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.760 [2024-11-18 00:17:10.447021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.760 [2024-11-18 00:17:10.447054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.760 [2024-11-18 00:17:10.447102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.760 [2024-11-18 00:17:10.447104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:46.760 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.020 [2024-11-18 00:17:10.598613] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.020 Null1 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.020 
00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.020 [2024-11-18 00:17:10.646936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.020 Null2 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.020 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.020 
00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.021 Null3 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.021 Null4 00:11:47.021 
00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.021 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:47.280 00:11:47.280 Discovery Log Number of Records 6, Generation counter 6 00:11:47.280 =====Discovery Log Entry 0====== 00:11:47.280 trtype: tcp 00:11:47.280 adrfam: ipv4 00:11:47.280 subtype: current discovery subsystem 00:11:47.280 treq: not required 00:11:47.280 portid: 0 00:11:47.280 trsvcid: 4420 00:11:47.280 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:47.280 traddr: 10.0.0.2 00:11:47.280 eflags: explicit discovery connections, duplicate discovery information 00:11:47.280 sectype: none 00:11:47.280 =====Discovery Log Entry 1====== 00:11:47.280 trtype: tcp 00:11:47.280 adrfam: ipv4 00:11:47.280 subtype: nvme subsystem 00:11:47.280 treq: not required 00:11:47.280 portid: 0 00:11:47.280 trsvcid: 4420 00:11:47.280 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:47.280 traddr: 10.0.0.2 00:11:47.280 eflags: none 00:11:47.280 sectype: none 00:11:47.280 =====Discovery Log Entry 2====== 00:11:47.280 
trtype: tcp 00:11:47.280 adrfam: ipv4 00:11:47.280 subtype: nvme subsystem 00:11:47.280 treq: not required 00:11:47.280 portid: 0 00:11:47.280 trsvcid: 4420 00:11:47.280 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:47.280 traddr: 10.0.0.2 00:11:47.280 eflags: none 00:11:47.280 sectype: none 00:11:47.280 =====Discovery Log Entry 3====== 00:11:47.280 trtype: tcp 00:11:47.280 adrfam: ipv4 00:11:47.280 subtype: nvme subsystem 00:11:47.280 treq: not required 00:11:47.280 portid: 0 00:11:47.280 trsvcid: 4420 00:11:47.280 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:47.280 traddr: 10.0.0.2 00:11:47.280 eflags: none 00:11:47.280 sectype: none 00:11:47.280 =====Discovery Log Entry 4====== 00:11:47.280 trtype: tcp 00:11:47.280 adrfam: ipv4 00:11:47.280 subtype: nvme subsystem 00:11:47.280 treq: not required 00:11:47.280 portid: 0 00:11:47.280 trsvcid: 4420 00:11:47.280 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:47.280 traddr: 10.0.0.2 00:11:47.280 eflags: none 00:11:47.280 sectype: none 00:11:47.280 =====Discovery Log Entry 5====== 00:11:47.280 trtype: tcp 00:11:47.280 adrfam: ipv4 00:11:47.280 subtype: discovery subsystem referral 00:11:47.280 treq: not required 00:11:47.280 portid: 0 00:11:47.280 trsvcid: 4430 00:11:47.280 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:47.280 traddr: 10.0.0.2 00:11:47.280 eflags: none 00:11:47.280 sectype: none 00:11:47.280 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:47.280 Perform nvmf subsystem discovery via RPC 00:11:47.280 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:47.280 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.280 00:17:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.280 [ 00:11:47.280 { 00:11:47.280 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:47.280 "subtype": "Discovery", 00:11:47.280 "listen_addresses": [ 00:11:47.280 { 00:11:47.280 "trtype": "TCP", 00:11:47.280 "adrfam": "IPv4", 00:11:47.280 "traddr": "10.0.0.2", 00:11:47.280 "trsvcid": "4420" 00:11:47.280 } 00:11:47.280 ], 00:11:47.280 "allow_any_host": true, 00:11:47.280 "hosts": [] 00:11:47.280 }, 00:11:47.280 { 00:11:47.280 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:47.280 "subtype": "NVMe", 00:11:47.280 "listen_addresses": [ 00:11:47.280 { 00:11:47.280 "trtype": "TCP", 00:11:47.280 "adrfam": "IPv4", 00:11:47.280 "traddr": "10.0.0.2", 00:11:47.280 "trsvcid": "4420" 00:11:47.280 } 00:11:47.280 ], 00:11:47.280 "allow_any_host": true, 00:11:47.280 "hosts": [], 00:11:47.280 "serial_number": "SPDK00000000000001", 00:11:47.280 "model_number": "SPDK bdev Controller", 00:11:47.280 "max_namespaces": 32, 00:11:47.280 "min_cntlid": 1, 00:11:47.280 "max_cntlid": 65519, 00:11:47.280 "namespaces": [ 00:11:47.280 { 00:11:47.280 "nsid": 1, 00:11:47.280 "bdev_name": "Null1", 00:11:47.280 "name": "Null1", 00:11:47.280 "nguid": "847E8CFD5E9842A5AD8446EADFD31D6F", 00:11:47.280 "uuid": "847e8cfd-5e98-42a5-ad84-46eadfd31d6f" 00:11:47.280 } 00:11:47.280 ] 00:11:47.280 }, 00:11:47.280 { 00:11:47.280 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:47.280 "subtype": "NVMe", 00:11:47.280 "listen_addresses": [ 00:11:47.280 { 00:11:47.280 "trtype": "TCP", 00:11:47.280 "adrfam": "IPv4", 00:11:47.280 "traddr": "10.0.0.2", 00:11:47.280 "trsvcid": "4420" 00:11:47.280 } 00:11:47.280 ], 00:11:47.280 "allow_any_host": true, 00:11:47.280 "hosts": [], 00:11:47.280 "serial_number": "SPDK00000000000002", 00:11:47.280 "model_number": "SPDK bdev Controller", 00:11:47.280 "max_namespaces": 32, 00:11:47.280 "min_cntlid": 1, 00:11:47.280 "max_cntlid": 65519, 00:11:47.280 "namespaces": [ 00:11:47.280 { 00:11:47.280 "nsid": 1, 00:11:47.280 "bdev_name": "Null2", 00:11:47.280 "name": "Null2", 00:11:47.280 "nguid": "87FF961BC7B34244B59DD710A20C09DC", 
00:11:47.280 "uuid": "87ff961b-c7b3-4244-b59d-d710a20c09dc" 00:11:47.280 } 00:11:47.280 ] 00:11:47.280 }, 00:11:47.280 { 00:11:47.280 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:47.280 "subtype": "NVMe", 00:11:47.280 "listen_addresses": [ 00:11:47.280 { 00:11:47.280 "trtype": "TCP", 00:11:47.280 "adrfam": "IPv4", 00:11:47.280 "traddr": "10.0.0.2", 00:11:47.280 "trsvcid": "4420" 00:11:47.280 } 00:11:47.280 ], 00:11:47.280 "allow_any_host": true, 00:11:47.280 "hosts": [], 00:11:47.280 "serial_number": "SPDK00000000000003", 00:11:47.280 "model_number": "SPDK bdev Controller", 00:11:47.280 "max_namespaces": 32, 00:11:47.280 "min_cntlid": 1, 00:11:47.280 "max_cntlid": 65519, 00:11:47.280 "namespaces": [ 00:11:47.280 { 00:11:47.280 "nsid": 1, 00:11:47.280 "bdev_name": "Null3", 00:11:47.280 "name": "Null3", 00:11:47.280 "nguid": "FFC7E92169C64474814B67FABDBC0FD0", 00:11:47.280 "uuid": "ffc7e921-69c6-4474-814b-67fabdbc0fd0" 00:11:47.280 } 00:11:47.280 ] 00:11:47.281 }, 00:11:47.281 { 00:11:47.281 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:47.281 "subtype": "NVMe", 00:11:47.281 "listen_addresses": [ 00:11:47.281 { 00:11:47.281 "trtype": "TCP", 00:11:47.281 "adrfam": "IPv4", 00:11:47.281 "traddr": "10.0.0.2", 00:11:47.281 "trsvcid": "4420" 00:11:47.281 } 00:11:47.281 ], 00:11:47.281 "allow_any_host": true, 00:11:47.281 "hosts": [], 00:11:47.281 "serial_number": "SPDK00000000000004", 00:11:47.281 "model_number": "SPDK bdev Controller", 00:11:47.281 "max_namespaces": 32, 00:11:47.281 "min_cntlid": 1, 00:11:47.281 "max_cntlid": 65519, 00:11:47.281 "namespaces": [ 00:11:47.281 { 00:11:47.281 "nsid": 1, 00:11:47.281 "bdev_name": "Null4", 00:11:47.281 "name": "Null4", 00:11:47.281 "nguid": "499242DF0A7A4937A1261D2BDBAEBAB7", 00:11:47.281 "uuid": "499242df-0a7a-4937-a126-1d2bdbaebab7" 00:11:47.281 } 00:11:47.281 ] 00:11:47.281 } 00:11:47.281 ] 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.281 
00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.281 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.310 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:47.310 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.310 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.310 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.310 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:47.310 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:47.310 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.310 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.310 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:47.570 rmmod nvme_tcp 00:11:47.570 rmmod nvme_fabrics 00:11:47.570 rmmod nvme_keyring 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 171553 ']' 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 171553 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 171553 ']' 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 171553 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:47.570 
00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 171553 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 171553' 00:11:47.570 killing process with pid 171553 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 171553 00:11:47.570 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 171553 00:11:47.830 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:47.830 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:47.830 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:47.830 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:47.830 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:47.830 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:47.830 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:47.830 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:47.830 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:47.830 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.830 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.830 00:17:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.755 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:49.755 00:11:49.755 real 0m5.700s 00:11:49.755 user 0m4.920s 00:11:49.755 sys 0m1.994s 00:11:49.755 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.755 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.755 ************************************ 00:11:49.755 END TEST nvmf_target_discovery 00:11:49.755 ************************************ 00:11:49.755 00:17:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:49.755 00:17:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:49.755 00:17:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.755 00:17:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:49.755 ************************************ 00:11:49.755 START TEST nvmf_referrals 00:11:49.755 ************************************ 00:11:49.755 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:49.755 * Looking for test storage... 
00:11:49.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.755 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:49.755 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:49.755 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:50.015 00:17:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:50.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.015 
--rc genhtml_branch_coverage=1 00:11:50.015 --rc genhtml_function_coverage=1 00:11:50.015 --rc genhtml_legend=1 00:11:50.015 --rc geninfo_all_blocks=1 00:11:50.015 --rc geninfo_unexecuted_blocks=1 00:11:50.015 00:11:50.015 ' 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:50.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.015 --rc genhtml_branch_coverage=1 00:11:50.015 --rc genhtml_function_coverage=1 00:11:50.015 --rc genhtml_legend=1 00:11:50.015 --rc geninfo_all_blocks=1 00:11:50.015 --rc geninfo_unexecuted_blocks=1 00:11:50.015 00:11:50.015 ' 00:11:50.015 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:50.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.015 --rc genhtml_branch_coverage=1 00:11:50.015 --rc genhtml_function_coverage=1 00:11:50.015 --rc genhtml_legend=1 00:11:50.016 --rc geninfo_all_blocks=1 00:11:50.016 --rc geninfo_unexecuted_blocks=1 00:11:50.016 00:11:50.016 ' 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:50.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.016 --rc genhtml_branch_coverage=1 00:11:50.016 --rc genhtml_function_coverage=1 00:11:50.016 --rc genhtml_legend=1 00:11:50.016 --rc geninfo_all_blocks=1 00:11:50.016 --rc geninfo_unexecuted_blocks=1 00:11:50.016 00:11:50.016 ' 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.016 
00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.016 00:17:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:50.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:50.016 00:17:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:50.016 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:50.017 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:50.017 00:17:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:51.927 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:51.927 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:51.927 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:51.927 00:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:51.927 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.927 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.928 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.928 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:51.928 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.928 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.928 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:51.928 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:51.928 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.928 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.928 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:51.928 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:51.928 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.928 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:52.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:11:52.197 00:11:52.197 --- 10.0.0.2 ping statistics --- 00:11:52.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.197 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:52.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:11:52.197 00:11:52.197 --- 10.0.0.1 ping statistics --- 00:11:52.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.197 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=173655 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.197 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 173655 00:11:52.198 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 173655 ']' 00:11:52.198 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.198 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.198 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.198 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.198 00:17:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.198 [2024-11-18 00:17:15.978484] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:11:52.198 [2024-11-18 00:17:15.978583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.458 [2024-11-18 00:17:16.053051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.458 [2024-11-18 00:17:16.103359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.458 [2024-11-18 00:17:16.103414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:52.458 [2024-11-18 00:17:16.103444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.458 [2024-11-18 00:17:16.103456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.458 [2024-11-18 00:17:16.103467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.458 [2024-11-18 00:17:16.105106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.458 [2024-11-18 00:17:16.105176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.458 [2024-11-18 00:17:16.105236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.458 [2024-11-18 00:17:16.105239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.458 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.458 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:52.458 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:52.458 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:52.458 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.458 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.458 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:52.458 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.458 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.458 [2024-11-18 00:17:16.259814] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.458 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.458 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:52.458 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.458 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.458 [2024-11-18 00:17:16.272066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:52.458 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.458 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:52.458 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.458 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:52.717 00:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:52.717 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.975 00:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:52.975 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:52.976 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:52.976 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:52.976 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:52.976 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:52.976 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:53.237 00:17:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:53.237 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:53.237 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:53.237 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:53.237 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:53.237 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:53.237 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:53.237 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:53.495 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:53.495 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:53.495 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:53.495 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:53.495 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:53.495 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:53.753 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:54.011 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:54.011 00:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:54.011 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:54.011 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:54.011 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.011 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:54.011 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:54.011 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:54.011 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.011 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.011 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.011 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:54.011 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.011 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:54.011 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:54.270 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.270 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:54.270 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:54.270 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:54.270 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:54.270 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.270 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:54.270 00:17:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:54.270 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:54.270 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:54.270 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:54.270 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:54.270 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.270 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:54.270 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:54.270 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:54.270 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.270 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:54.270 rmmod nvme_tcp 00:11:54.529 rmmod nvme_fabrics 00:11:54.529 rmmod nvme_keyring 00:11:54.529 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.529 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:54.529 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:54.529 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 173655 ']' 00:11:54.529 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 173655 00:11:54.529 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 173655 ']' 00:11:54.529 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 173655 00:11:54.529 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:54.529 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.529 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 173655 00:11:54.529 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.529 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.529 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 173655' 00:11:54.529 killing process with pid 173655 00:11:54.529 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 173655 00:11:54.529 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 173655 00:11:54.789 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.789 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:54.789 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:54.789 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:54.789 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:54.789 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:54.789 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:54.789 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:54.789 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:54.789 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.789 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.789 00:17:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.702 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:56.702 00:11:56.702 real 0m6.896s 00:11:56.702 user 0m10.549s 00:11:56.702 sys 0m2.263s 00:11:56.702 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.702 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.702 ************************************ 
00:11:56.702 END TEST nvmf_referrals 00:11:56.702 ************************************ 00:11:56.702 00:17:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:56.702 00:17:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:56.702 00:17:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.702 00:17:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:56.702 ************************************ 00:11:56.702 START TEST nvmf_connect_disconnect 00:11:56.702 ************************************ 00:11:56.702 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:56.702 * Looking for test storage... 
00:11:56.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.702 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:56.702 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:56.961 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:56.961 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:56.961 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.961 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.961 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.961 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.961 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:56.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.962 --rc genhtml_branch_coverage=1 00:11:56.962 --rc genhtml_function_coverage=1 00:11:56.962 --rc genhtml_legend=1 00:11:56.962 --rc geninfo_all_blocks=1 00:11:56.962 --rc geninfo_unexecuted_blocks=1 00:11:56.962 00:11:56.962 ' 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:56.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.962 --rc genhtml_branch_coverage=1 00:11:56.962 --rc genhtml_function_coverage=1 00:11:56.962 --rc genhtml_legend=1 00:11:56.962 --rc geninfo_all_blocks=1 00:11:56.962 --rc geninfo_unexecuted_blocks=1 00:11:56.962 00:11:56.962 ' 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:56.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.962 --rc genhtml_branch_coverage=1 00:11:56.962 --rc genhtml_function_coverage=1 00:11:56.962 --rc genhtml_legend=1 00:11:56.962 --rc geninfo_all_blocks=1 00:11:56.962 --rc geninfo_unexecuted_blocks=1 00:11:56.962 00:11:56.962 ' 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:56.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.962 --rc genhtml_branch_coverage=1 00:11:56.962 --rc genhtml_function_coverage=1 00:11:56.962 --rc genhtml_legend=1 00:11:56.962 --rc geninfo_all_blocks=1 00:11:56.962 --rc geninfo_unexecuted_blocks=1 00:11:56.962 00:11:56.962 ' 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:56.962 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.963 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:56.963 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:56.963 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:56.963 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.963 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.963 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.963 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:56.963 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:56.963 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:56.963 00:17:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.509 00:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:59.509 00:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:59.509 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:59.509 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.509 00:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.509 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:59.510 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.510 00:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:59.510 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.510 00:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:59.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:11:59.510 00:11:59.510 --- 10.0.0.2 ping statistics --- 00:11:59.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.510 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:59.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:11:59.510 00:11:59.510 --- 10.0.0.1 ping statistics --- 00:11:59.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.510 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=175955 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 175955 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 175955 ']' 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.510 00:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:59.510 [2024-11-18 00:17:23.034308] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:11:59.510 [2024-11-18 00:17:23.034426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.510 [2024-11-18 00:17:23.109172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.510 [2024-11-18 00:17:23.158544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:59.510 [2024-11-18 00:17:23.158600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.510 [2024-11-18 00:17:23.158630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.510 [2024-11-18 00:17:23.158641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.510 [2024-11-18 00:17:23.158650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:59.510 [2024-11-18 00:17:23.160323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.510 [2024-11-18 00:17:23.160357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.510 [2024-11-18 00:17:23.160417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.510 [2024-11-18 00:17:23.160420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.510 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.510 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:59.510 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:59.510 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:59.510 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:59.510 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.510 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:59.510 00:17:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.510 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:59.510 [2024-11-18 00:17:23.303769] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:59.510 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.510 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:59.510 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.510 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:59.769 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.769 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:59.769 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:59.769 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.769 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:59.769 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.769 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:59.769 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.769 00:17:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:59.769 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.769 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.769 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.769 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:59.769 [2024-11-18 00:17:23.370378] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.769 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.769 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:59.769 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:59.769 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:59.769 00:17:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:02.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.143 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.745 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:52.745 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:52.745 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:52.745 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:15:52.745 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:52.745 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:15:52.745 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:52.745 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:52.745 rmmod nvme_tcp 00:15:52.745 rmmod nvme_fabrics 00:15:52.745 rmmod nvme_keyring 00:15:52.745 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:15:52.745 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:15:52.745 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:15:52.745 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 175955 ']' 00:15:52.746 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 175955 00:15:52.746 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 175955 ']' 00:15:52.746 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 175955 00:15:52.746 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:15:52.746 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.746 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 175955 00:15:52.746 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.746 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.746 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 175955' 00:15:52.746 killing process with pid 175955 00:15:52.746 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 175955 00:15:52.746 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 175955 00:15:53.004 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:53.004 00:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:53.004 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:53.004 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:15:53.004 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:15:53.004 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:53.004 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:15:53.004 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:53.004 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:53.004 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.004 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.004 00:21:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.925 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:54.925 00:15:54.925 real 3m58.217s 00:15:54.925 user 15m7.407s 00:15:54.925 sys 0m35.318s 00:15:54.925 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.925 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:54.925 ************************************ 00:15:54.925 END TEST nvmf_connect_disconnect 00:15:54.925 ************************************ 00:15:54.925 00:21:18 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:54.925 00:21:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.925 00:21:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.925 00:21:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.925 ************************************ 00:15:54.925 START TEST nvmf_multitarget 00:15:54.925 ************************************ 00:15:54.925 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:55.191 * Looking for test storage... 00:15:55.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 
-- # read -ra ver1 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:55.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.191 --rc genhtml_branch_coverage=1 00:15:55.191 --rc genhtml_function_coverage=1 00:15:55.191 --rc genhtml_legend=1 00:15:55.191 --rc geninfo_all_blocks=1 00:15:55.191 --rc 
geninfo_unexecuted_blocks=1 00:15:55.191 00:15:55.191 ' 00:15:55.191 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:55.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.191 --rc genhtml_branch_coverage=1 00:15:55.191 --rc genhtml_function_coverage=1 00:15:55.191 --rc genhtml_legend=1 00:15:55.191 --rc geninfo_all_blocks=1 00:15:55.191 --rc geninfo_unexecuted_blocks=1 00:15:55.191 00:15:55.191 ' 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:55.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.192 --rc genhtml_branch_coverage=1 00:15:55.192 --rc genhtml_function_coverage=1 00:15:55.192 --rc genhtml_legend=1 00:15:55.192 --rc geninfo_all_blocks=1 00:15:55.192 --rc geninfo_unexecuted_blocks=1 00:15:55.192 00:15:55.192 ' 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:55.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.192 --rc genhtml_branch_coverage=1 00:15:55.192 --rc genhtml_function_coverage=1 00:15:55.192 --rc genhtml_legend=1 00:15:55.192 --rc geninfo_all_blocks=1 00:15:55.192 --rc geninfo_unexecuted_blocks=1 00:15:55.192 00:15:55.192 ' 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.192 00:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.192 00:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:55.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:15:55.192 00:21:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@322 -- # local -ga mlx 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # 
[[ e810 == mlx5 ]] 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:57.723 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:57.723 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:57.723 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:57.724 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- 
# [[ tcp == tcp ]] 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:57.724 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:57.724 00:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:57.724 00:21:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:57.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:15:57.724 00:15:57.724 --- 10.0.0.2 ping statistics --- 00:15:57.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.724 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:57.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:57.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:15:57.724 00:15:57.724 --- 10.0.0.1 ping statistics --- 00:15:57.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.724 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=207974 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 207974 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 207974 ']' 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:57.724 [2024-11-18 00:21:21.190117] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:15:57.724 [2024-11-18 00:21:21.190192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.724 [2024-11-18 00:21:21.264676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:57.724 [2024-11-18 00:21:21.312903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.724 [2024-11-18 00:21:21.312959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.724 [2024-11-18 00:21:21.312989] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.724 [2024-11-18 00:21:21.313001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.724 [2024-11-18 00:21:21.313011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:57.724 [2024-11-18 00:21:21.314689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.724 [2024-11-18 00:21:21.314759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.724 [2024-11-18 00:21:21.314823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:57.724 [2024-11-18 00:21:21.314826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:57.724 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:57.982 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:57.982 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:15:57.982 "nvmf_tgt_1" 00:15:57.982 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:58.240 "nvmf_tgt_2" 00:15:58.240 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:58.240 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:58.240 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:58.240 00:21:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:58.498 true 00:15:58.498 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:58.498 true 00:15:58.498 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:58.498 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:58.754 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:58.755 00:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:58.755 rmmod nvme_tcp 00:15:58.755 rmmod nvme_fabrics 00:15:58.755 rmmod nvme_keyring 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 207974 ']' 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 207974 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 207974 ']' 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 207974 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 207974 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # 
'[' reactor_0 = sudo ']' 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 207974' 00:15:58.755 killing process with pid 207974 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 207974 00:15:58.755 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 207974 00:15:59.013 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:59.013 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:59.013 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:59.013 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:15:59.013 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:15:59.013 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:59.013 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:15:59.013 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:59.013 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:59.013 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.013 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.013 00:21:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.917 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:00.917 00:16:00.917 
real 0m5.955s 00:16:00.917 user 0m7.148s 00:16:00.917 sys 0m1.975s 00:16:00.917 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.917 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:00.917 ************************************ 00:16:00.917 END TEST nvmf_multitarget 00:16:00.918 ************************************ 00:16:00.918 00:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:00.918 00:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:00.918 00:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.918 00:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:01.177 ************************************ 00:16:01.177 START TEST nvmf_rpc 00:16:01.177 ************************************ 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:01.177 * Looking for test storage... 
00:16:01.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:01.177 00:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:01.177 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:01.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.178 --rc genhtml_branch_coverage=1 00:16:01.178 --rc genhtml_function_coverage=1 00:16:01.178 --rc genhtml_legend=1 00:16:01.178 --rc geninfo_all_blocks=1 00:16:01.178 --rc geninfo_unexecuted_blocks=1 
00:16:01.178 00:16:01.178 ' 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:01.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.178 --rc genhtml_branch_coverage=1 00:16:01.178 --rc genhtml_function_coverage=1 00:16:01.178 --rc genhtml_legend=1 00:16:01.178 --rc geninfo_all_blocks=1 00:16:01.178 --rc geninfo_unexecuted_blocks=1 00:16:01.178 00:16:01.178 ' 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:01.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.178 --rc genhtml_branch_coverage=1 00:16:01.178 --rc genhtml_function_coverage=1 00:16:01.178 --rc genhtml_legend=1 00:16:01.178 --rc geninfo_all_blocks=1 00:16:01.178 --rc geninfo_unexecuted_blocks=1 00:16:01.178 00:16:01.178 ' 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:01.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.178 --rc genhtml_branch_coverage=1 00:16:01.178 --rc genhtml_function_coverage=1 00:16:01.178 --rc genhtml_legend=1 00:16:01.178 --rc geninfo_all_blocks=1 00:16:01.178 --rc geninfo_unexecuted_blocks=1 00:16:01.178 00:16:01.178 ' 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.178 00:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:01.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:01.178 00:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:01.178 00:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:03.718 
00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:16:03.718 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:03.718 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:03.718 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:03.718 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.718 00:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:03.718 
00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:03.718 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:03.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:16:03.719 00:16:03.719 --- 10.0.0.2 ping statistics --- 00:16:03.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.719 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:03.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:03.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:16:03.719 00:16:03.719 --- 10.0.0.1 ping statistics --- 00:16:03.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.719 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=210200 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:03.719 
00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 210200 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 210200 ']' 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.719 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.719 [2024-11-18 00:21:27.327355] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:16:03.719 [2024-11-18 00:21:27.327447] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.719 [2024-11-18 00:21:27.398580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.719 [2024-11-18 00:21:27.441958] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.719 [2024-11-18 00:21:27.442012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.719 [2024-11-18 00:21:27.442047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.719 [2024-11-18 00:21:27.442057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:03.719 [2024-11-18 00:21:27.442067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.719 [2024-11-18 00:21:27.443681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.719 [2024-11-18 00:21:27.443739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.719 [2024-11-18 00:21:27.443807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.719 [2024-11-18 00:21:27.443810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:03.978 "tick_rate": 2700000000, 00:16:03.978 "poll_groups": [ 00:16:03.978 { 00:16:03.978 "name": "nvmf_tgt_poll_group_000", 00:16:03.978 "admin_qpairs": 0, 00:16:03.978 "io_qpairs": 0, 00:16:03.978 
"current_admin_qpairs": 0, 00:16:03.978 "current_io_qpairs": 0, 00:16:03.978 "pending_bdev_io": 0, 00:16:03.978 "completed_nvme_io": 0, 00:16:03.978 "transports": [] 00:16:03.978 }, 00:16:03.978 { 00:16:03.978 "name": "nvmf_tgt_poll_group_001", 00:16:03.978 "admin_qpairs": 0, 00:16:03.978 "io_qpairs": 0, 00:16:03.978 "current_admin_qpairs": 0, 00:16:03.978 "current_io_qpairs": 0, 00:16:03.978 "pending_bdev_io": 0, 00:16:03.978 "completed_nvme_io": 0, 00:16:03.978 "transports": [] 00:16:03.978 }, 00:16:03.978 { 00:16:03.978 "name": "nvmf_tgt_poll_group_002", 00:16:03.978 "admin_qpairs": 0, 00:16:03.978 "io_qpairs": 0, 00:16:03.978 "current_admin_qpairs": 0, 00:16:03.978 "current_io_qpairs": 0, 00:16:03.978 "pending_bdev_io": 0, 00:16:03.978 "completed_nvme_io": 0, 00:16:03.978 "transports": [] 00:16:03.978 }, 00:16:03.978 { 00:16:03.978 "name": "nvmf_tgt_poll_group_003", 00:16:03.978 "admin_qpairs": 0, 00:16:03.978 "io_qpairs": 0, 00:16:03.978 "current_admin_qpairs": 0, 00:16:03.978 "current_io_qpairs": 0, 00:16:03.978 "pending_bdev_io": 0, 00:16:03.978 "completed_nvme_io": 0, 00:16:03.978 "transports": [] 00:16:03.978 } 00:16:03.978 ] 00:16:03.978 }' 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:03.978 [2024-11-18 00:21:27.677964] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:16:03.978 "tick_rate": 2700000000,
00:16:03.978 "poll_groups": [
00:16:03.978 {
00:16:03.978 "name": "nvmf_tgt_poll_group_000",
00:16:03.978 "admin_qpairs": 0,
00:16:03.978 "io_qpairs": 0,
00:16:03.978 "current_admin_qpairs": 0,
00:16:03.978 "current_io_qpairs": 0,
00:16:03.978 "pending_bdev_io": 0,
00:16:03.978 "completed_nvme_io": 0,
00:16:03.978 "transports": [
00:16:03.978 {
00:16:03.978 "trtype": "TCP"
00:16:03.978 }
00:16:03.978 ]
00:16:03.978 },
00:16:03.978 {
00:16:03.978 "name": "nvmf_tgt_poll_group_001",
00:16:03.978 "admin_qpairs": 0,
00:16:03.978 "io_qpairs": 0,
00:16:03.978 "current_admin_qpairs": 0,
00:16:03.978 "current_io_qpairs": 0,
00:16:03.978 "pending_bdev_io": 0,
00:16:03.978 "completed_nvme_io": 0,
00:16:03.978 "transports": [
00:16:03.978 {
00:16:03.978 "trtype": "TCP"
00:16:03.978 }
00:16:03.978 ]
00:16:03.978 },
00:16:03.978 {
00:16:03.978 "name": "nvmf_tgt_poll_group_002",
00:16:03.978 "admin_qpairs": 0,
00:16:03.978 "io_qpairs": 0,
00:16:03.978 "current_admin_qpairs": 0,
00:16:03.978 "current_io_qpairs": 0,
00:16:03.978 "pending_bdev_io": 0,
00:16:03.978 "completed_nvme_io": 0,
00:16:03.978 "transports": [
00:16:03.978 {
00:16:03.978 "trtype": "TCP"
00:16:03.978 }
00:16:03.978 ]
00:16:03.978 },
00:16:03.978 {
00:16:03.978 "name": "nvmf_tgt_poll_group_003",
00:16:03.978 "admin_qpairs": 0,
00:16:03.978 "io_qpairs": 0,
00:16:03.978 "current_admin_qpairs": 0,
00:16:03.978 "current_io_qpairs": 0,
00:16:03.978 "pending_bdev_io": 0,
00:16:03.978 "completed_nvme_io": 0,
00:16:03.978 "transports": [
00:16:03.978 {
00:16:03.978 "trtype": "TCP"
00:16:03.978 }
00:16:03.978 ]
00:16:03.978 }
00:16:03.978 ]
00:16:03.978 }'
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:03.978 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.237 Malloc1
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.237 [2024-11-18 00:21:27.842720] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:16:04.237 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:16:04.238 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420
00:16:04.238 [2024-11-18 00:21:27.865381] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55'
00:16:04.238 Failed to write to /dev/nvme-fabrics: Input/output error
00:16:04.238 could not add new controller: failed to write to nvme-fabrics device
00:16:04.238 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:16:04.238 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:04.238 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:04.238 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:04.238 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:04.238 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:04.238 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:04.238 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:04.238 00:21:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:04.803 00:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:16:04.803 00:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:04.803 00:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:04.803 00:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:04.803 00:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:07.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:07.328 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:16:07.329 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:07.329 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:16:07.329 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:07.329 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:16:07.329 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:16:07.329 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:07.329 [2024-11-18 00:21:30.756473] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55'
00:16:07.329 Failed to write to /dev/nvme-fabrics: Input/output error
00:16:07.329 could not add new controller: failed to write to nvme-fabrics device
00:16:07.329 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:16:07.329 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:07.329 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:07.329 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:07.329 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:16:07.329 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.329 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:07.329 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:07.329 00:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:07.587 00:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:16:07.587 00:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:07.587 00:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:07.587 00:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:07.587 00:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:10.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:10.115 [2024-11-18 00:21:33.517201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.115 00:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:10.373 00:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:10.373 00:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:10.373 00:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:10.373 00:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:10.373 00:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:12.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:12.923 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:12.924 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:12.924 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:12.924 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:12.924 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:12.924 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:12.924 [2024-11-18 00:21:36.247544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:12.924 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:12.924 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:12.924 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:12.924 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:12.924 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:12.924 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:12.924 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:12.924 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:12.924 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:12.924 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:13.182 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:13.182 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:13.182 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:13.182 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:13.182 00:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:15.089 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:15.089 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:15.089 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:15.089 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:15.089 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:15.089 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:15.089 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:15.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:15.347 [2024-11-18 00:21:38.993205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.347 00:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:15.347 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.347 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:15.347 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.347 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:15.347 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.347 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:15.928 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:15.928 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:15.929 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:15.929 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:15.929 00:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:17.912 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:17.912 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:17.912 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:17.912 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:17.912 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:17.912 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:17.912 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:18.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.196 [2024-11-18 00:21:41.826424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.196 00:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:18.809 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:18.809 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:18.809 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:18.809 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:18.809 00:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:20.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc --
common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.824 [2024-11-18 00:21:44.608200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.824 00:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.824 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.109 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.109 00:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:21.726 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:21.726 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:21.726 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:21.726 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:21.726 00:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:23.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:23.731 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.732 [2024-11-18 00:21:47.443974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.732 [2024-11-18 00:21:47.492029] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:23.732 
00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.732 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.732 [2024-11-18 00:21:47.540207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:24.032 
00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.032 [2024-11-18 00:21:47.588372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:24.032 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.033 [2024-11-18 
00:21:47.636539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.033 
00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:24.033 "tick_rate": 2700000000, 00:16:24.033 "poll_groups": [ 00:16:24.033 { 00:16:24.033 "name": "nvmf_tgt_poll_group_000", 00:16:24.033 "admin_qpairs": 2, 00:16:24.033 "io_qpairs": 84, 00:16:24.033 "current_admin_qpairs": 0, 00:16:24.033 "current_io_qpairs": 0, 00:16:24.033 "pending_bdev_io": 0, 00:16:24.033 "completed_nvme_io": 152, 00:16:24.033 "transports": [ 00:16:24.033 { 00:16:24.033 "trtype": "TCP" 00:16:24.033 } 00:16:24.033 ] 00:16:24.033 }, 00:16:24.033 { 00:16:24.033 "name": "nvmf_tgt_poll_group_001", 00:16:24.033 "admin_qpairs": 2, 00:16:24.033 "io_qpairs": 84, 00:16:24.033 "current_admin_qpairs": 0, 00:16:24.033 "current_io_qpairs": 0, 00:16:24.033 "pending_bdev_io": 0, 00:16:24.033 "completed_nvme_io": 186, 00:16:24.033 "transports": [ 00:16:24.033 { 00:16:24.033 "trtype": "TCP" 00:16:24.033 } 00:16:24.033 ] 00:16:24.033 }, 00:16:24.033 { 00:16:24.033 "name": "nvmf_tgt_poll_group_002", 00:16:24.033 "admin_qpairs": 1, 00:16:24.033 "io_qpairs": 84, 00:16:24.033 "current_admin_qpairs": 0, 00:16:24.033 "current_io_qpairs": 0, 00:16:24.033 "pending_bdev_io": 0, 00:16:24.033 "completed_nvme_io": 257, 00:16:24.033 "transports": [ 00:16:24.033 { 00:16:24.033 "trtype": "TCP" 00:16:24.033 } 00:16:24.033 ] 00:16:24.033 }, 00:16:24.033 { 00:16:24.033 "name": "nvmf_tgt_poll_group_003", 00:16:24.033 "admin_qpairs": 2, 00:16:24.033 "io_qpairs": 84, 
00:16:24.033 "current_admin_qpairs": 0, 00:16:24.033 "current_io_qpairs": 0, 00:16:24.033 "pending_bdev_io": 0, 00:16:24.033 "completed_nvme_io": 91, 00:16:24.033 "transports": [ 00:16:24.033 { 00:16:24.033 "trtype": "TCP" 00:16:24.033 } 00:16:24.033 ] 00:16:24.033 } 00:16:24.033 ] 00:16:24.033 }' 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:24.033 rmmod nvme_tcp 00:16:24.033 rmmod nvme_fabrics 00:16:24.033 rmmod nvme_keyring 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 210200 ']' 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 210200 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 210200 ']' 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 210200 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.033 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 210200 00:16:24.303 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:24.303 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:24.303 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 210200' 00:16:24.303 killing process with pid 210200 00:16:24.303 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@973 -- # kill 210200 00:16:24.303 00:21:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 210200 00:16:24.303 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:24.303 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:24.303 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:24.303 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:24.303 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:24.303 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:24.303 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:24.303 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:24.303 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:24.303 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.303 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:24.303 00:21:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:26.851 00:16:26.851 real 0m25.384s 00:16:26.851 user 1m21.943s 00:16:26.851 sys 0m4.272s 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.851 ************************************ 00:16:26.851 END TEST nvmf_rpc 00:16:26.851 
************************************ 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:26.851 ************************************ 00:16:26.851 START TEST nvmf_invalid 00:16:26.851 ************************************ 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:26.851 * Looking for test storage... 00:16:26.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:26.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.851 --rc genhtml_branch_coverage=1 00:16:26.851 --rc genhtml_function_coverage=1 00:16:26.851 --rc genhtml_legend=1 00:16:26.851 --rc geninfo_all_blocks=1 00:16:26.851 --rc geninfo_unexecuted_blocks=1 00:16:26.851 00:16:26.851 ' 
00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:26.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.851 --rc genhtml_branch_coverage=1 00:16:26.851 --rc genhtml_function_coverage=1 00:16:26.851 --rc genhtml_legend=1 00:16:26.851 --rc geninfo_all_blocks=1 00:16:26.851 --rc geninfo_unexecuted_blocks=1 00:16:26.851 00:16:26.851 ' 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:26.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.851 --rc genhtml_branch_coverage=1 00:16:26.851 --rc genhtml_function_coverage=1 00:16:26.851 --rc genhtml_legend=1 00:16:26.851 --rc geninfo_all_blocks=1 00:16:26.851 --rc geninfo_unexecuted_blocks=1 00:16:26.851 00:16:26.851 ' 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:26.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.851 --rc genhtml_branch_coverage=1 00:16:26.851 --rc genhtml_function_coverage=1 00:16:26.851 --rc genhtml_legend=1 00:16:26.851 --rc geninfo_all_blocks=1 00:16:26.851 --rc geninfo_unexecuted_blocks=1 00:16:26.851 00:16:26.851 ' 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.851 00:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.851 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.852 
00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.852 00:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:26.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:26.852 00:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:26.852 00:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:28.760 00:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:28.760 00:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:28.760 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:28.760 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.760 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:28.761 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:28.761 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:28.761 00:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:28.761 00:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:28.761 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:29.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:29.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:16:29.020 00:16:29.020 --- 10.0.0.2 ping statistics --- 00:16:29.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.020 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:29.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:29.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:16:29.020 00:16:29.020 --- 10.0.0.1 ping statistics --- 00:16:29.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.020 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:29.020 00:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=214739 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 214739 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 214739 ']' 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.020 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:29.020 [2024-11-18 00:21:52.660728] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:16:29.020 [2024-11-18 00:21:52.660795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.020 [2024-11-18 00:21:52.732277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:29.020 [2024-11-18 00:21:52.782020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.020 [2024-11-18 00:21:52.782080] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.020 [2024-11-18 00:21:52.782108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.020 [2024-11-18 00:21:52.782120] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.020 [2024-11-18 00:21:52.782129] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:29.020 [2024-11-18 00:21:52.783873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:29.020 [2024-11-18 00:21:52.783937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:29.020 [2024-11-18 00:21:52.784003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:16:29.020 [2024-11-18 00:21:52.784006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:29.279 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:29.279 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:16:29.279 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:29.279 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:29.279 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:16:29.279 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:29.279 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:16:29.279 00:21:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29931
00:16:29.537 [2024-11-18 00:21:53.227093] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:16:29.537 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:16:29.537 {
00:16:29.537 "nqn": "nqn.2016-06.io.spdk:cnode29931",
00:16:29.537 "tgt_name": "foobar",
00:16:29.537 "method": "nvmf_create_subsystem",
00:16:29.537 "req_id": 1
00:16:29.537 }
00:16:29.537 Got JSON-RPC error response
00:16:29.537 response:
00:16:29.537 {
00:16:29.537 "code": -32603,
00:16:29.537 "message": "Unable to find target foobar"
00:16:29.537 }'
00:16:29.537 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:16:29.537 {
00:16:29.537 "nqn": "nqn.2016-06.io.spdk:cnode29931",
00:16:29.537 "tgt_name": "foobar",
00:16:29.537 "method": "nvmf_create_subsystem",
00:16:29.537 "req_id": 1
00:16:29.537 }
00:16:29.537 Got JSON-RPC error response
00:16:29.537 response:
00:16:29.537 {
00:16:29.537 "code": -32603,
00:16:29.537 "message": "Unable to find target foobar"
00:16:29.537 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:16:29.537 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:16:29.537 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode20267
00:16:29.794 [2024-11-18 00:21:53.500013] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20267: invalid serial number 'SPDKISFASTANDAWESOME'
00:16:29.794 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:16:29.794 {
00:16:29.794 "nqn": "nqn.2016-06.io.spdk:cnode20267",
00:16:29.794 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:16:29.794 "method": "nvmf_create_subsystem",
00:16:29.794 "req_id": 1
00:16:29.794 }
00:16:29.794 Got JSON-RPC error response
00:16:29.794 response:
00:16:29.794 {
00:16:29.794 "code": -32602,
00:16:29.794 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:16:29.794 }'
00:16:29.794 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:16:29.794 {
00:16:29.794 "nqn": "nqn.2016-06.io.spdk:cnode20267",
00:16:29.794 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:16:29.794 "method": "nvmf_create_subsystem",
00:16:29.794 "req_id": 1
00:16:29.794 }
00:16:29.794 Got JSON-RPC error response
00:16:29.794 response:
00:16:29.794 {
00:16:29.794 "code": -32602,
00:16:29.794 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:16:29.794 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:16:29.794 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:16:29.794 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1643
00:16:30.052 [2024-11-18 00:21:53.772918] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1643: invalid model number 'SPDK_Controller'
00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:16:30.052 {
00:16:30.052 "nqn": "nqn.2016-06.io.spdk:cnode1643",
00:16:30.052 "model_number": "SPDK_Controller\u001f",
00:16:30.052 "method": "nvmf_create_subsystem",
00:16:30.052 "req_id": 1
00:16:30.052 }
00:16:30.052 Got JSON-RPC error response
00:16:30.052 response:
00:16:30.052 {
00:16:30.052 "code": -32602,
00:16:30.052 "message": "Invalid MN SPDK_Controller\u001f"
00:16:30.052 }'
00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:16:30.052 {
00:16:30.052 "nqn": "nqn.2016-06.io.spdk:cnode1643",
00:16:30.052 "model_number": "SPDK_Controller\u001f",
00:16:30.052 "method": "nvmf_create_subsystem",
00:16:30.052 "req_id": 1
00:16:30.052 }
00:16:30.052 Got JSON-RPC error response
00:16:30.052 response:
00:16:30.052 {
00:16:30.052 "code": -32602,
00:16:30.052 "message": "Invalid MN SPDK_Controller\u001f"
00:16:30.052 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local
length=21 ll 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.052 00:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.052 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:30.053 00:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 
00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:16:30.053 
00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.053 00:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40
00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28'
00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='('
00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ A == \- ]]
00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Aqj yE29?RNij8q_\ 2c('
00:16:30.053 00:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Aqj yE29?RNij8q_\ 2c(' nqn.2016-06.io.spdk:cnode17260
00:16:30.311 [2024-11-18 00:21:54.102100] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17260: invalid serial number 'Aqj yE29?RNij8q_\ 2c('
00:16:30.311 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:16:30.311 {
00:16:30.311 "nqn": "nqn.2016-06.io.spdk:cnode17260",
00:16:30.311 "serial_number": "Aqj yE29?RNij8q_\\ 2c(",
00:16:30.311 "method": "nvmf_create_subsystem",
00:16:30.311 "req_id": 1
00:16:30.311 }
00:16:30.311 Got JSON-RPC error response
00:16:30.311 response:
00:16:30.311 {
00:16:30.311 "code": -32602,
00:16:30.311 "message": "Invalid SN Aqj yE29?RNij8q_\\ 2c("
00:16:30.311 }'
00:16:30.311 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:16:30.311 {
00:16:30.311 "nqn": "nqn.2016-06.io.spdk:cnode17260",
00:16:30.311 "serial_number": "Aqj yE29?RNij8q_\\ 2c(",
00:16:30.311 "method": "nvmf_create_subsystem",
00:16:30.311 "req_id": 1
00:16:30.311 }
00:16:30.311 Got JSON-RPC error response
00:16:30.311 response:
00:16:30.311 {
00:16:30.311 "code": -32602,
00:16:30.311 "message": "Invalid SN Aqj yE29?RNij8q_\\ 2c("
00:16:30.311 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:16:30.311 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:16:30.311 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:16:30.311 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:16:30.311 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:16:30.311 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:16:30.311 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:16:30.311 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:30.311 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38
00:16:30.311 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26'
00:16:30.311 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&'
00:16:30.311 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:16:30.311 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:16:30.570 00:21:54
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:16:30.570 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:16:30.570 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:16:30.570 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.570 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.570 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:16:30.571 00:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:16:30.571 00:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 
00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.571 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:30.572 
00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:30.572 00:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:30.572 00:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ & == \- ]] 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '&D`}B"Us_WawXKd.nfa3Jf0dZmt/I ^5x|><|V>53' 00:16:30.572 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '&D`}B"Us_WawXKd.nfa3Jf0dZmt/I ^5x|><|V>53' nqn.2016-06.io.spdk:cnode7480 00:16:30.831 [2024-11-18 00:21:54.535477] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7480: invalid model number 
'&D`}B"Us_WawXKd.nfa3Jf0dZmt/I ^5x|><|V>53' 00:16:30.831 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:30.831 { 00:16:30.831 "nqn": "nqn.2016-06.io.spdk:cnode7480", 00:16:30.831 "model_number": "&D`}B\"Us_WawXKd.nfa3Jf0dZmt/I ^5x|><|V>53", 00:16:30.831 "method": "nvmf_create_subsystem", 00:16:30.831 "req_id": 1 00:16:30.831 } 00:16:30.831 Got JSON-RPC error response 00:16:30.831 response: 00:16:30.831 { 00:16:30.831 "code": -32602, 00:16:30.831 "message": "Invalid MN &D`}B\"Us_WawXKd.nfa3Jf0dZmt/I ^5x|><|V>53" 00:16:30.831 }' 00:16:30.831 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:30.831 { 00:16:30.831 "nqn": "nqn.2016-06.io.spdk:cnode7480", 00:16:30.831 "model_number": "&D`}B\"Us_WawXKd.nfa3Jf0dZmt/I ^5x|><|V>53", 00:16:30.831 "method": "nvmf_create_subsystem", 00:16:30.831 "req_id": 1 00:16:30.831 } 00:16:30.831 Got JSON-RPC error response 00:16:30.831 response: 00:16:30.831 { 00:16:30.831 "code": -32602, 00:16:30.831 "message": "Invalid MN &D`}B\"Us_WawXKd.nfa3Jf0dZmt/I ^5x|><|V>53" 00:16:30.831 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:30.831 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:31.089 [2024-11-18 00:21:54.800470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.089 00:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:31.346 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:31.346 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:31.346 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:31.347 00:21:55 
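The long character-by-character trace above is invalid.sh building the random model number one position at a time: pick a character code, render it as hex with `printf %x`, decode it with `echo -e '\xNN'`, and append it to `string`. A minimal standalone sketch of that loop, reconstructed from the trace (the function name and the printable-ASCII code range are assumptions; the real script draws codes from its own tables):

```shell
#!/usr/bin/env bash
# Hedged sketch of the invalid.sh@24-25 loop traced above: convert a numeric
# character code to hex, decode the hex escape, append to the string under test.
gen_random_string() {
    local length=$1 string='' ll code hex
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 94 + 33 ))   # assumed range: printable ASCII 33-126
        hex=$(printf %x "$code")       # e.g. 74 -> 4a
        string+=$(echo -e "\\x$hex")   # e.g. \x4a -> J
    done
    echo "$string"
}

gen_random_string 41
```

The resulting string is then handed to `rpc.py nvmf_create_subsystem -d`, which is expected to reject it with an "Invalid MN" JSON-RPC error that the test glob-matches, as the trace shows.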
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:31.347 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:31.604 [2024-11-18 00:21:55.346248] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:31.604 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:31.604 { 00:16:31.604 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:31.604 "listen_address": { 00:16:31.604 "trtype": "tcp", 00:16:31.604 "traddr": "", 00:16:31.604 "trsvcid": "4421" 00:16:31.604 }, 00:16:31.604 "method": "nvmf_subsystem_remove_listener", 00:16:31.604 "req_id": 1 00:16:31.604 } 00:16:31.604 Got JSON-RPC error response 00:16:31.604 response: 00:16:31.604 { 00:16:31.604 "code": -32602, 00:16:31.604 "message": "Invalid parameters" 00:16:31.604 }' 00:16:31.604 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:31.604 { 00:16:31.604 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:31.604 "listen_address": { 00:16:31.604 "trtype": "tcp", 00:16:31.604 "traddr": "", 00:16:31.604 "trsvcid": "4421" 00:16:31.604 }, 00:16:31.604 "method": "nvmf_subsystem_remove_listener", 00:16:31.604 "req_id": 1 00:16:31.604 } 00:16:31.604 Got JSON-RPC error response 00:16:31.604 response: 00:16:31.604 { 00:16:31.604 "code": -32602, 00:16:31.604 "message": "Invalid parameters" 00:16:31.604 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:31.604 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6431 -i 0 00:16:31.863 [2024-11-18 00:21:55.623117] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6431: invalid cntlid range 
[0-65519] 00:16:31.863 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:31.863 { 00:16:31.863 "nqn": "nqn.2016-06.io.spdk:cnode6431", 00:16:31.863 "min_cntlid": 0, 00:16:31.863 "method": "nvmf_create_subsystem", 00:16:31.863 "req_id": 1 00:16:31.863 } 00:16:31.863 Got JSON-RPC error response 00:16:31.863 response: 00:16:31.863 { 00:16:31.863 "code": -32602, 00:16:31.863 "message": "Invalid cntlid range [0-65519]" 00:16:31.863 }' 00:16:31.863 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:31.863 { 00:16:31.863 "nqn": "nqn.2016-06.io.spdk:cnode6431", 00:16:31.863 "min_cntlid": 0, 00:16:31.863 "method": "nvmf_create_subsystem", 00:16:31.863 "req_id": 1 00:16:31.863 } 00:16:31.863 Got JSON-RPC error response 00:16:31.863 response: 00:16:31.863 { 00:16:31.863 "code": -32602, 00:16:31.863 "message": "Invalid cntlid range [0-65519]" 00:16:31.863 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:31.863 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19608 -i 65520 00:16:32.120 [2024-11-18 00:21:55.884021] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19608: invalid cntlid range [65520-65519] 00:16:32.120 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:32.120 { 00:16:32.120 "nqn": "nqn.2016-06.io.spdk:cnode19608", 00:16:32.120 "min_cntlid": 65520, 00:16:32.120 "method": "nvmf_create_subsystem", 00:16:32.120 "req_id": 1 00:16:32.120 } 00:16:32.120 Got JSON-RPC error response 00:16:32.120 response: 00:16:32.120 { 00:16:32.120 "code": -32602, 00:16:32.120 "message": "Invalid cntlid range [65520-65519]" 00:16:32.120 }' 00:16:32.120 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:32.120 { 
00:16:32.120 "nqn": "nqn.2016-06.io.spdk:cnode19608", 00:16:32.120 "min_cntlid": 65520, 00:16:32.120 "method": "nvmf_create_subsystem", 00:16:32.120 "req_id": 1 00:16:32.120 } 00:16:32.120 Got JSON-RPC error response 00:16:32.120 response: 00:16:32.120 { 00:16:32.120 "code": -32602, 00:16:32.120 "message": "Invalid cntlid range [65520-65519]" 00:16:32.120 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:32.120 00:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode988 -I 0 00:16:32.377 [2024-11-18 00:21:56.144915] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode988: invalid cntlid range [1-0] 00:16:32.377 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:32.377 { 00:16:32.377 "nqn": "nqn.2016-06.io.spdk:cnode988", 00:16:32.377 "max_cntlid": 0, 00:16:32.377 "method": "nvmf_create_subsystem", 00:16:32.377 "req_id": 1 00:16:32.377 } 00:16:32.377 Got JSON-RPC error response 00:16:32.377 response: 00:16:32.377 { 00:16:32.377 "code": -32602, 00:16:32.377 "message": "Invalid cntlid range [1-0]" 00:16:32.377 }' 00:16:32.377 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:32.377 { 00:16:32.377 "nqn": "nqn.2016-06.io.spdk:cnode988", 00:16:32.377 "max_cntlid": 0, 00:16:32.377 "method": "nvmf_create_subsystem", 00:16:32.377 "req_id": 1 00:16:32.377 } 00:16:32.377 Got JSON-RPC error response 00:16:32.377 response: 00:16:32.377 { 00:16:32.377 "code": -32602, 00:16:32.377 "message": "Invalid cntlid range [1-0]" 00:16:32.377 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:32.377 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14305 -I 65520 00:16:32.635 
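The cntlid probes in this run all draw -32602 errors for the same underlying rule: controller IDs are valid in 1-65519, and min_cntlid must not exceed max_cntlid, so 0, 65520, and an inverted pair are each rejected. A sketch of that rule as implied by the error strings in the log (this mirrors the reported `[min-max]` messages and assumed RPC defaults, not the actual `rpc_nvmf_create_subsystem` C source):

```shell
# Sketch of the cntlid bounds implied by the errors in this run: IDs are valid
# in 1-65519 and min must not exceed max. Defaults of 1/65519 are assumptions
# inferred from the "[0-65519]" and "[1-0]" error strings above.
validate_cntlid_range() {
    local min=${1:-1} max=${2:-65519}
    if (( min < 1 || min > 65519 || max < 1 || max > 65519 || min > max )); then
        echo "Invalid cntlid range [$min-$max]"
        return 1
    fi
    echo "cntlid range [$min-$max] ok"
}

validate_cntlid_range 0 65519 || true   # prints: Invalid cntlid range [0-65519]
```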
[2024-11-18 00:21:56.405773] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14305: invalid cntlid range [1-65520] 00:16:32.635 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:32.635 { 00:16:32.635 "nqn": "nqn.2016-06.io.spdk:cnode14305", 00:16:32.635 "max_cntlid": 65520, 00:16:32.635 "method": "nvmf_create_subsystem", 00:16:32.635 "req_id": 1 00:16:32.635 } 00:16:32.635 Got JSON-RPC error response 00:16:32.635 response: 00:16:32.635 { 00:16:32.635 "code": -32602, 00:16:32.635 "message": "Invalid cntlid range [1-65520]" 00:16:32.635 }' 00:16:32.635 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:32.635 { 00:16:32.635 "nqn": "nqn.2016-06.io.spdk:cnode14305", 00:16:32.635 "max_cntlid": 65520, 00:16:32.635 "method": "nvmf_create_subsystem", 00:16:32.635 "req_id": 1 00:16:32.635 } 00:16:32.635 Got JSON-RPC error response 00:16:32.635 response: 00:16:32.635 { 00:16:32.635 "code": -32602, 00:16:32.635 "message": "Invalid cntlid range [1-65520]" 00:16:32.635 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:32.635 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10126 -i 6 -I 5 00:16:32.892 [2024-11-18 00:21:56.690769] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10126: invalid cntlid range [6-5] 00:16:32.892 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:32.892 { 00:16:32.892 "nqn": "nqn.2016-06.io.spdk:cnode10126", 00:16:32.892 "min_cntlid": 6, 00:16:32.892 "max_cntlid": 5, 00:16:32.892 "method": "nvmf_create_subsystem", 00:16:32.892 "req_id": 1 00:16:32.892 } 00:16:32.892 Got JSON-RPC error response 00:16:32.892 response: 00:16:32.892 { 00:16:32.892 "code": -32602, 00:16:32.892 "message": "Invalid 
cntlid range [6-5]" 00:16:32.892 }' 00:16:32.892 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:32.892 { 00:16:32.892 "nqn": "nqn.2016-06.io.spdk:cnode10126", 00:16:32.892 "min_cntlid": 6, 00:16:32.892 "max_cntlid": 5, 00:16:32.892 "method": "nvmf_create_subsystem", 00:16:32.892 "req_id": 1 00:16:32.892 } 00:16:32.892 Got JSON-RPC error response 00:16:32.892 response: 00:16:32.892 { 00:16:32.892 "code": -32602, 00:16:32.892 "message": "Invalid cntlid range [6-5]" 00:16:32.892 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:32.892 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:33.150 { 00:16:33.150 "name": "foobar", 00:16:33.150 "method": "nvmf_delete_target", 00:16:33.150 "req_id": 1 00:16:33.150 } 00:16:33.150 Got JSON-RPC error response 00:16:33.150 response: 00:16:33.150 { 00:16:33.150 "code": -32602, 00:16:33.150 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:33.150 }' 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:33.150 { 00:16:33.150 "name": "foobar", 00:16:33.150 "method": "nvmf_delete_target", 00:16:33.150 "req_id": 1 00:16:33.150 } 00:16:33.150 Got JSON-RPC error response 00:16:33.150 response: 00:16:33.150 { 00:16:33.150 "code": -32602, 00:16:33.150 "message": "The specified target doesn't exist, cannot delete it." 
00:16:33.150 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:33.150 rmmod nvme_tcp 00:16:33.150 rmmod nvme_fabrics 00:16:33.150 rmmod nvme_keyring 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 214739 ']' 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 214739 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 214739 ']' 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 214739 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 214739 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 214739' 00:16:33.150 killing process with pid 214739 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 214739 00:16:33.150 00:21:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 214739 00:16:33.410 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:33.410 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:33.410 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:33.410 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:16:33.410 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:16:33.410 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:33.410 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:16:33.410 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:33.410 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:33.410 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.410 00:21:57 
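The `killprocess 214739` trace above shows the guard sequence autotest_common.sh runs before tearing down the nvmf target: confirm a pid was given, confirm the process is still alive with `kill -0`, look up its command name via `ps --no-headers -o comm=`, refuse to signal a bare `sudo`, then kill and reap it. A condensed sketch of that guard, reconstructed from the trace (Linux path only; the real helper also branches on `uname` for FreeBSD):

```shell
# Condensed sketch of the killprocess guard traced above (Linux path only,
# reconstructed from the log; not the verbatim autotest_common.sh helper).
killprocess() {
    local pid=$1 name
    [ -n "$pid" ] || return 1                 # no pid given
    kill -0 "$pid" 2>/dev/null || return 1    # already gone
    name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
    [ "$name" != sudo ] || return 1           # never signal a bare sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap if it is our child
}
```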
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:33.410 00:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:35.951 00:16:35.951 real 0m9.015s 00:16:35.951 user 0m21.462s 00:16:35.951 sys 0m2.558s 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:35.951 ************************************ 00:16:35.951 END TEST nvmf_invalid 00:16:35.951 ************************************ 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:35.951 ************************************ 00:16:35.951 START TEST nvmf_connect_stress 00:16:35.951 ************************************ 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:35.951 * Looking for test storage... 
00:16:35.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:35.951 00:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.951 00:21:59 
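The scripts/common.sh trace above is the `lt 1.15 2` check deciding which lcov flags to pass: `cmp_versions` splits both version strings on `.`/`-`/`:` into arrays, walks the components numerically, and tests the first difference against the requested operator. A compact sketch of that logic, reconstructed from the trace (the real helper also validates each component through `decimal`):

```shell
# Compact sketch of scripts/common.sh cmp_versions/lt as traced above:
# split on . - : and compare component-wise, missing components counting as 0.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 rel='=' v max
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then rel='>'; break; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then rel='<'; break; fi
    done
    [ "$rel" = "$op" ]
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

Here `lt 1.15 2` succeeds, which is why the run falls back to the pre-2.0 `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option spelling seen in the LCOV_OPTS export that follows.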
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:35.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.951 --rc genhtml_branch_coverage=1 00:16:35.951 --rc genhtml_function_coverage=1 00:16:35.951 --rc genhtml_legend=1 00:16:35.951 --rc geninfo_all_blocks=1 00:16:35.951 --rc geninfo_unexecuted_blocks=1 00:16:35.951 00:16:35.951 ' 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:35.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.951 --rc genhtml_branch_coverage=1 00:16:35.951 --rc genhtml_function_coverage=1 00:16:35.951 --rc genhtml_legend=1 00:16:35.951 --rc geninfo_all_blocks=1 00:16:35.951 --rc geninfo_unexecuted_blocks=1 00:16:35.951 00:16:35.951 ' 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:35.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.951 --rc genhtml_branch_coverage=1 00:16:35.951 --rc genhtml_function_coverage=1 00:16:35.951 --rc genhtml_legend=1 00:16:35.951 --rc geninfo_all_blocks=1 00:16:35.951 --rc geninfo_unexecuted_blocks=1 00:16:35.951 00:16:35.951 ' 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:35.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.951 --rc genhtml_branch_coverage=1 00:16:35.951 --rc genhtml_function_coverage=1 00:16:35.951 --rc genhtml_legend=1 00:16:35.951 --rc geninfo_all_blocks=1 00:16:35.951 --rc geninfo_unexecuted_blocks=1 00:16:35.951 00:16:35.951 ' 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.951 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:35.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
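The `integer expression expected` message logged above (nvmf/common.sh line 33) is a benign bash pitfall: an empty string passed to a numeric `[ ... -eq ... ]` test makes `[` fail with exit status 2 rather than evaluate false. A minimal sketch of the failure mode and the usual defaulting guard; the function names here are illustrative, not from the test scripts:

```shell
# Fragile form, as at nvmf/common.sh line 33: an empty argument to a numeric
# [ comparison triggers "integer expression expected" (exit status 2).
is_one() {
  [ "$1" -eq 1 ] 2>/dev/null
}

# Guarded form: default an unset/empty value to 0 before the numeric test,
# so the comparison always sees an integer.
is_one_safe() {
  [ "${1:-0}" -eq 1 ]
}
```

With the guard, `is_one_safe ""` simply evaluates false instead of erroring out, which keeps `set -e` scripts from tripping over it.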
00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:35.952 00:21:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:37.856 00:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:37.856 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:37.857 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:37.857 00:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:37.857 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.857 00:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:37.857 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:37.857 Found net devices under 0000:0a:00.1: cvl_0_1 
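The `Found net devices under ...` lines come from the discovery loop at nvmf/common.sh lines 410-429, which maps each supported PCI address to its kernel net interface by globbing sysfs (`/sys/bus/pci/devices/$pci/net/*`) and stripping the path prefix. A small sketch of that lookup, assuming a helper name of our own and an optional base-directory parameter added purely so it can be exercised outside a real sysfs:

```shell
# List the net interface names registered under a PCI device, mirroring the
# glob at nvmf/common.sh line 411: pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
# The second argument (sysfs base dir) is an assumption for testability.
pci_netdevs() {
  local pci=$1 base=${2:-/sys/bus/pci/devices}
  local d
  for d in "$base/$pci/net/"*; do
    # An unmatched glob stays literal, so check existence before printing.
    [ -e "$d" ] && basename "$d"
  done
  return 0
}
```

In the run above this resolves `0000:0a:00.0` to `cvl_0_0` and `0000:0a:00.1` to `cvl_0_1`.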
00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:37.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:37.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:16:37.857 00:16:37.857 --- 10.0.0.2 ping statistics --- 00:16:37.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.857 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:37.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:37.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:16:37.857 00:16:37.857 --- 10.0.0.1 ping statistics --- 00:16:37.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.857 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:37.857 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:38.116 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:38.116 00:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:38.116 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:38.116 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.116 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=217382 00:16:38.116 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:38.116 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 217382 00:16:38.116 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 217382 ']' 00:16:38.116 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.116 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:38.116 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.116 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:38.116 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.116 [2024-11-18 00:22:01.744183] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:16:38.116 [2024-11-18 00:22:01.744276] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.116 [2024-11-18 00:22:01.815659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:38.116 [2024-11-18 00:22:01.858325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.116 [2024-11-18 00:22:01.858379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.116 [2024-11-18 00:22:01.858406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:38.116 [2024-11-18 00:22:01.858417] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:38.116 [2024-11-18 00:22:01.858426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
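The namespace setup earlier in this run (nvmf/common.sh lines 271-287) splits the two ports of one physical NIC across namespaces so target (10.0.0.2) and initiator (10.0.0.1) traffic crosses real hardware on a single host. A dry-run sketch of that sequence, printing the commands instead of executing them since the real steps need root and the `cvl_0_*` ports; the function name is ours, the addresses and port come from the log:

```shell
# Emit (do not run) the single-host NVMe/TCP test topology commands: one port
# stays in the default namespace as the initiator, its peer moves into a
# private namespace as the target, and TCP port 4420 is opened for discovery.
setup_netns_cmds() {
  local ns=$1 target_if=$2 initiator_if=$3
  cat <<EOF
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT
EOF
}
```

Invoked as `setup_netns_cmds cvl_0_0_ns_spdk cvl_0_0 cvl_0_1`, this matches the order of operations the log records, ending with the ping checks in both directions.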
00:16:38.116 [2024-11-18 00:22:01.859869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.116 [2024-11-18 00:22:01.859930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:38.116 [2024-11-18 00:22:01.859933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.374 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.374 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:16:38.374 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:38.374 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:38.374 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.374 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.374 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:38.374 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.374 00:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.374 [2024-11-18 00:22:02.005876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.374 [2024-11-18 00:22:02.023173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.374 NULL1 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=217403 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:38.374 00:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:38.374 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.375 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.632 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.632 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:38.632 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:38.632 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.632 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:39.199 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.199 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:39.199 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:39.199 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.199 00:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:39.456 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.456 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:39.456 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:39.456 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.456 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:39.714 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.714 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:39.714 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:39.714 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.714 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:39.972 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.972 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:39.972 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:39.972 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.972 00:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:40.230 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.230 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:40.230 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:40.231 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.231 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:40.803 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.803 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:40.803 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:40.803 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.803 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.063 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.063 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:41.063 00:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:41.063 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.063 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.321 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.321 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:41.321 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:41.321 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.321 00:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.579 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.579 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:41.579 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:41.579 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.579 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:41.837 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.837 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:41.837 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:41.837 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.837 00:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:42.419 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.419 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:42.419 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:42.419 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.419 00:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:42.677 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.677 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:42.677 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:42.677 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.677 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:42.935 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.935 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:42.935 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:42.935 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.935 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.192 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.192 00:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:43.192 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.192 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.192 00:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:43.451 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.451 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:43.451 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:43.451 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.451 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.018 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.018 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:44.018 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.018 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.018 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.275 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.275 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:44.275 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.275 00:22:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.275 00:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.533 00:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.533 00:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:44.533 00:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.533 00:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.533 00:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:44.793 00:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.793 00:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:44.793 00:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:44.793 00:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.793 00:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.050 00:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.050 00:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:45.050 00:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:45.051 00:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.051 00:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.625 00:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.625 00:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:45.625 00:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:45.625 00:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.625 00:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.890 00:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.890 00:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:45.890 00:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:45.890 00:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.890 00:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.148 00:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.148 00:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:46.148 00:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.148 00:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.148 00:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.406 00:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.406 00:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:46.406 
00:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.406 00:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.406 00:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.664 00:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.665 00:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:46.665 00:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.665 00:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.665 00:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.230 00:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.230 00:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:47.230 00:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.230 00:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.230 00:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.488 00:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.488 00:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:47.488 00:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.488 00:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.488 
00:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.747 00:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.747 00:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:47.747 00:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.747 00:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.747 00:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.006 00:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.006 00:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:48.006 00:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.006 00:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.006 00:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.262 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.262 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:48.262 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.262 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.262 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.519 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:48.778 00:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.778 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 217403 00:16:48.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (217403) - No such process 00:16:48.778 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 217403 00:16:48.778 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:48.778 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:48.778 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:48.778 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:48.778 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:48.778 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:48.778 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:48.778 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:48.778 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:48.778 rmmod nvme_tcp 00:16:48.778 rmmod nvme_fabrics 00:16:48.778 rmmod nvme_keyring 00:16:48.778 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:48.778 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:48.778 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 
00:16:48.778 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 217382 ']' 00:16:48.779 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 217382 00:16:48.779 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 217382 ']' 00:16:48.779 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 217382 00:16:48.779 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:16:48.779 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:48.779 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217382 00:16:48.779 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:48.779 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:48.779 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 217382' 00:16:48.779 killing process with pid 217382 00:16:48.779 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 217382 00:16:48.779 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 217382 00:16:49.039 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:49.039 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:49.039 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:49.040 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 
00:16:49.040 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:16:49.040 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:49.040 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:16:49.040 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:49.040 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:49.040 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.040 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.040 00:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.945 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:50.945 00:16:50.945 real 0m15.454s 00:16:50.945 user 0m39.776s 00:16:50.945 sys 0m4.840s 00:16:50.945 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.945 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.945 ************************************ 00:16:50.945 END TEST nvmf_connect_stress 00:16:50.945 ************************************ 00:16:50.945 00:22:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:50.945 00:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:50.945 00:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:16:50.945 00:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:50.945 ************************************ 00:16:50.945 START TEST nvmf_fused_ordering 00:16:50.945 ************************************ 00:16:50.945 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:51.205 * Looking for test storage... 00:16:51.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # 
local 'op=<' 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:51.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.205 --rc genhtml_branch_coverage=1 00:16:51.205 --rc genhtml_function_coverage=1 00:16:51.205 --rc genhtml_legend=1 00:16:51.205 --rc geninfo_all_blocks=1 00:16:51.205 --rc geninfo_unexecuted_blocks=1 00:16:51.205 00:16:51.205 ' 00:16:51.205 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:51.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.205 --rc genhtml_branch_coverage=1 00:16:51.205 --rc genhtml_function_coverage=1 00:16:51.205 --rc genhtml_legend=1 00:16:51.205 --rc geninfo_all_blocks=1 00:16:51.205 --rc geninfo_unexecuted_blocks=1 00:16:51.205 00:16:51.205 ' 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:51.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.206 --rc genhtml_branch_coverage=1 00:16:51.206 --rc genhtml_function_coverage=1 00:16:51.206 --rc genhtml_legend=1 00:16:51.206 --rc geninfo_all_blocks=1 00:16:51.206 --rc geninfo_unexecuted_blocks=1 00:16:51.206 00:16:51.206 ' 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:51.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.206 --rc genhtml_branch_coverage=1 
00:16:51.206 --rc genhtml_function_coverage=1 00:16:51.206 --rc genhtml_legend=1 00:16:51.206 --rc geninfo_all_blocks=1 00:16:51.206 --rc geninfo_unexecuted_blocks=1 00:16:51.206 00:16:51.206 ' 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.206 00:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:51.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
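The `[: : integer expression expected` message above comes from nvmf/common.sh line 33 passing an empty string to a numeric `[` test (`'[' '' -eq 1 ']'`). A minimal sketch of the failure mode and one common guard (the variable name and the defaulting fix are illustrative, not taken from the SPDK source):

```shell
#!/bin/sh
# Reproduce the error class: an empty string is not an integer,
# so a numeric comparison against it fails (exit status 2).
if [ '' -eq 1 ] 2>/dev/null; then
  echo "never reached"
fi

# A common guard is to default the variable to 0 before the
# numeric test, so an unset/empty value compares cleanly.
HUGE_EVEN_ALLOC=""
if [ "${HUGE_EVEN_ALLOC:-0}" -eq 1 ]; then
  echo "flag set"
else
  echo "flag unset"
fi
```

With the guard in place the empty value falls through to the `else` branch instead of emitting the `integer expression expected` diagnostic; the unguarded form in the log is harmless here only because the script continues past the failed test.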
00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:16:51.206 00:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:53.740 00:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:53.740 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:53.740 00:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:53.740 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.740 00:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:53.740 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:53.740 Found net devices under 0000:0a:00.1: cvl_0_1 
00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:53.740 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:53.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:53.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:16:53.741 00:16:53.741 --- 10.0.0.2 ping statistics --- 00:16:53.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.741 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:53.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:53.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:16:53.741 00:16:53.741 --- 10.0.0.1 ping statistics --- 00:16:53.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.741 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:53.741 00:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=220669 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 220669 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 220669 ']' 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:53.741 [2024-11-18 00:22:17.330676] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:16:53.741 [2024-11-18 00:22:17.330747] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.741 [2024-11-18 00:22:17.402925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.741 [2024-11-18 00:22:17.444516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.741 [2024-11-18 00:22:17.444574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.741 [2024-11-18 00:22:17.444601] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.741 [2024-11-18 00:22:17.444612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.741 [2024-11-18 00:22:17.444621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:53.741 [2024-11-18 00:22:17.445203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:53.741 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:53.999 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.999 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:53.999 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.999 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:53.999 [2024-11-18 00:22:17.583768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.999 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.999 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:53.999 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.999 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:53.999 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.999 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.999 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.999 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:53.999 [2024-11-18 00:22:17.599976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.999 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.999 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:53.999 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.000 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:54.000 NULL1 00:16:54.000 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.000 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:54.000 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.000 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:54.000 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.000 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:54.000 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.000 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:54.000 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.000 00:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:54.000 [2024-11-18 00:22:17.643202] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:16:54.000 [2024-11-18 00:22:17.643235] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid220693 ] 00:16:54.262 Attached to nqn.2016-06.io.spdk:cnode1 00:16:54.262 Namespace ID: 1 size: 1GB 00:16:54.262 fused_ordering(0) 00:16:54.262 fused_ordering(1) 00:16:54.262 fused_ordering(2) 00:16:54.262 fused_ordering(3) 00:16:54.262 fused_ordering(4) 00:16:54.262 fused_ordering(5) 00:16:54.262 fused_ordering(6) 00:16:54.262 fused_ordering(7) 00:16:54.262 fused_ordering(8) 00:16:54.262 fused_ordering(9) 00:16:54.262 fused_ordering(10) 00:16:54.262 fused_ordering(11) 00:16:54.262 fused_ordering(12) 00:16:54.262 fused_ordering(13) 00:16:54.262 fused_ordering(14) 00:16:54.262 fused_ordering(15) 00:16:54.262 fused_ordering(16) 00:16:54.262 fused_ordering(17) 00:16:54.262 fused_ordering(18) 00:16:54.262 fused_ordering(19) 00:16:54.262 fused_ordering(20) 00:16:54.262 fused_ordering(21) 00:16:54.262 fused_ordering(22) 00:16:54.262 fused_ordering(23) 00:16:54.262 fused_ordering(24) 00:16:54.262 fused_ordering(25) 00:16:54.262 fused_ordering(26) 00:16:54.262 fused_ordering(27) 00:16:54.262 
fused_ordering(28) 00:16:54.262 [... repetitive fused_ordering(29)-fused_ordering(1022) iteration output elided ...] fused_ordering(1023) 00:16:55.914 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:55.914 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:55.914 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:55.914 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:16:55.914 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:55.914 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:16:55.914 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:55.915 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:55.915 rmmod nvme_tcp 00:16:56.173 rmmod nvme_fabrics 00:16:56.173 rmmod nvme_keyring 00:16:56.173 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:16:56.173 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:16:56.173 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:16:56.173 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 220669 ']' 00:16:56.173 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 220669 00:16:56.173 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 220669 ']' 00:16:56.173 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 220669 00:16:56.173 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:16:56.173 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.173 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 220669 00:16:56.173 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:56.173 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:56.173 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 220669' 00:16:56.173 killing process with pid 220669 00:16:56.173 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 220669 00:16:56.173 00:22:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 220669 00:16:56.443 00:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:56.443 00:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:16:56.443 00:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:56.443 00:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:16:56.443 00:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:16:56.443 00:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:56.443 00:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:16:56.443 00:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:56.443 00:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:56.443 00:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.443 00:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.443 00:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.369 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:58.369 00:16:58.369 real 0m7.316s 00:16:58.369 user 0m4.932s 00:16:58.369 sys 0m2.810s 00:16:58.369 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.369 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:58.369 ************************************ 00:16:58.369 END TEST nvmf_fused_ordering 00:16:58.369 ************************************ 00:16:58.369 00:22:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:58.369 00:22:22 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:58.369 00:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.369 00:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:58.369 ************************************ 00:16:58.369 START TEST nvmf_ns_masking 00:16:58.369 ************************************ 00:16:58.369 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:58.369 * Looking for test storage... 00:16:58.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:58.369 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:58.369 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:16:58.369 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:16:58.630 00:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:58.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.630 --rc genhtml_branch_coverage=1 00:16:58.630 --rc genhtml_function_coverage=1 00:16:58.630 --rc genhtml_legend=1 00:16:58.630 --rc geninfo_all_blocks=1 00:16:58.630 --rc geninfo_unexecuted_blocks=1 00:16:58.630 00:16:58.630 ' 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:58.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.630 --rc genhtml_branch_coverage=1 00:16:58.630 --rc genhtml_function_coverage=1 00:16:58.630 --rc genhtml_legend=1 00:16:58.630 --rc geninfo_all_blocks=1 00:16:58.630 --rc geninfo_unexecuted_blocks=1 00:16:58.630 00:16:58.630 ' 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:58.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.630 --rc genhtml_branch_coverage=1 00:16:58.630 --rc genhtml_function_coverage=1 00:16:58.630 --rc genhtml_legend=1 00:16:58.630 --rc geninfo_all_blocks=1 00:16:58.630 --rc geninfo_unexecuted_blocks=1 00:16:58.630 00:16:58.630 ' 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:58.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.630 --rc genhtml_branch_coverage=1 00:16:58.630 --rc 
genhtml_function_coverage=1 00:16:58.630 --rc genhtml_legend=1 00:16:58.630 --rc geninfo_all_blocks=1 00:16:58.630 --rc geninfo_unexecuted_blocks=1 00:16:58.630 00:16:58.630 ' 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.630 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:58.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=74e2a612-a943-459b-b9f5-ec29b0d967c7 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=191da529-272d-4450-b286-3e5c1dfa2c5a 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=574fd9eb-8e6e-451c-a81a-31ce252b7aae 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:16:58.631 00:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:01.168 00:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.168 00:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:01.168 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:01.168 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:17:01.168 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:01.168 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.168 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:01.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:17:01.169 00:17:01.169 --- 10.0.0.2 ping statistics --- 00:17:01.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.169 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:01.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:01.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:17:01.169 00:17:01.169 --- 10.0.0.1 ping statistics --- 00:17:01.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.169 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=222900 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 222900 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 222900 ']' 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:01.169 [2024-11-18 00:22:24.636234] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:17:01.169 [2024-11-18 00:22:24.636332] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.169 [2024-11-18 00:22:24.706981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.169 [2024-11-18 00:22:24.748449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.169 [2024-11-18 00:22:24.748508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:01.169 [2024-11-18 00:22:24.748535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.169 [2024-11-18 00:22:24.748545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.169 [2024-11-18 00:22:24.748555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:01.169 [2024-11-18 00:22:24.749133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.169 00:22:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:01.427 [2024-11-18 00:22:25.184458] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:01.427 00:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:01.427 00:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:01.427 00:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:17:01.697 Malloc1 00:17:01.697 00:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:02.263 Malloc2 00:17:02.263 00:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:02.521 00:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:02.779 00:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.037 [2024-11-18 00:22:26.620430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.037 00:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:03.037 00:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 574fd9eb-8e6e-451c-a81a-31ce252b7aae -a 10.0.0.2 -s 4420 -i 4 00:17:03.037 00:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:03.037 00:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:03.037 00:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:03.037 00:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:03.037 00:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:05.566 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:05.566 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:05.566 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:05.566 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:05.566 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:05.566 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:05.566 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:05.566 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:05.566 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:05.566 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:05.566 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:05.566 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:05.566 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:05.566 [ 0]:0x1 00:17:05.566 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:05.566 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:05.566 
00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6cc84e537f3d49e39a3dfa06c38c28d8 00:17:05.566 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6cc84e537f3d49e39a3dfa06c38c28d8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:05.566 00:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:05.566 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:05.566 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:05.566 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:05.566 [ 0]:0x1 00:17:05.566 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:05.566 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:05.566 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6cc84e537f3d49e39a3dfa06c38c28d8 00:17:05.566 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6cc84e537f3d49e39a3dfa06c38c28d8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:05.566 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:05.566 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:05.566 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:05.566 [ 1]:0x2 00:17:05.566 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:17:05.566 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:05.566 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fc57cfcf08d84fdb98a02aff396df7a9 00:17:05.566 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fc57cfcf08d84fdb98a02aff396df7a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:05.566 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:05.566 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:05.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.566 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:06.130 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:06.387 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:06.387 00:22:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 574fd9eb-8e6e-451c-a81a-31ce252b7aae -a 10.0.0.2 -s 4420 -i 4 00:17:06.387 00:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:06.387 00:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:06.387 00:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:06.387 00:22:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:06.387 00:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:06.387 00:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:08.287 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:08.287 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:08.287 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:08.287 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:08.287 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:08.287 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:08.287 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:08.287 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:08.545 [ 0]:0x2 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fc57cfcf08d84fdb98a02aff396df7a9 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fc57cfcf08d84fdb98a02aff396df7a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:08.545 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:08.804 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:08.804 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:08.804 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:08.804 [ 0]:0x1 00:17:08.804 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:08.804 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:08.804 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6cc84e537f3d49e39a3dfa06c38c28d8 00:17:08.804 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6cc84e537f3d49e39a3dfa06c38c28d8 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:08.804 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:08.804 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:08.804 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:08.804 [ 1]:0x2 00:17:08.804 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:08.804 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:08.804 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fc57cfcf08d84fdb98a02aff396df7a9 00:17:08.804 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fc57cfcf08d84fdb98a02aff396df7a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:08.804 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:09.371 [ 0]:0x2 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:09.371 00:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:09.371 00:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fc57cfcf08d84fdb98a02aff396df7a9 00:17:09.371 00:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fc57cfcf08d84fdb98a02aff396df7a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:09.371 00:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:09.371 00:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:09.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.371 00:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:09.639 00:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:09.639 00:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 574fd9eb-8e6e-451c-a81a-31ce252b7aae -a 10.0.0.2 -s 4420 -i 4 00:17:09.898 00:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:09.898 00:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:09.898 00:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:09.898 00:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:09.898 00:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:09.898 00:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:11.797 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:11.797 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:11.797 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:11.797 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:11.797 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:11.797 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:11.797 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:11.797 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:11.797 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:11.797 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:11.797 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:11.797 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:11.797 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:11.797 [ 0]:0x1 00:17:11.797 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:11.797 00:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:12.055 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6cc84e537f3d49e39a3dfa06c38c28d8 00:17:12.055 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6cc84e537f3d49e39a3dfa06c38c28d8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:12.055 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:12.055 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:12.055 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:12.055 [ 1]:0x2 00:17:12.055 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:12.055 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:12.055 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fc57cfcf08d84fdb98a02aff396df7a9 00:17:12.055 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fc57cfcf08d84fdb98a02aff396df7a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:12.055 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:12.313 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:12.313 00:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:12.313 
00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:12.313 [ 0]:0x2 00:17:12.313 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:12.314 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:12.314 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fc57cfcf08d84fdb98a02aff396df7a9 00:17:12.314 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fc57cfcf08d84fdb98a02aff396df7a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:12.314 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:12.314 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:12.314 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:12.314 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:12.314 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.314 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:12.314 00:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.314 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:12.314 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.314 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:12.314 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:12.314 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:12.572 [2024-11-18 00:22:36.386039] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:12.572 request: 00:17:12.572 { 00:17:12.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:12.572 "nsid": 2, 00:17:12.572 "host": "nqn.2016-06.io.spdk:host1", 00:17:12.572 "method": "nvmf_ns_remove_host", 00:17:12.572 "req_id": 1 00:17:12.572 } 00:17:12.572 Got JSON-RPC error response 00:17:12.572 response: 00:17:12.572 { 00:17:12.572 "code": -32602, 00:17:12.572 "message": "Invalid parameters" 00:17:12.572 } 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:12.830 00:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:12.830 [ 0]:0x2 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fc57cfcf08d84fdb98a02aff396df7a9 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fc57cfcf08d84fdb98a02aff396df7a9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:12.830 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:13.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:13.089 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=224514 00:17:13.089 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:13.089 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.089 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 224514 /var/tmp/host.sock 00:17:13.089 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 224514 ']' 00:17:13.089 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:13.089 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.089 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:13.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:13.089 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.089 00:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:13.089 [2024-11-18 00:22:36.734058] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
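The xtrace above repeatedly exercises ns_masking.sh's `ns_is_visible` helper: list the controller's namespaces, read the NGUID of the nsid under test, and treat an all-zero NGUID as "masked". A minimal sketch of that decision with the `nvme`/`jq` plumbing reduced to a pure string check (the standalone `nguid_visible` function is an illustration of the comparison on line 45 of the script, not SPDK's API):

```shell
# Core of the visibility check: an all-zero NGUID means the namespace
# is not exposed to this host (i.e. it is masked).
nguid_visible() {
  [[ $1 != "00000000000000000000000000000000" ]]
}

# On a real host the NGUID comes from the nvme CLI, roughly:
#   nvme list-ns /dev/nvme0 | grep "0x$nsid"
#   nguid=$(nvme id-ns /dev/nvme0 -n "0x$nsid" -o json | jq -r .nguid)
nguid_visible fc57cfcf08d84fdb98a02aff396df7a9 && echo "visible"
nguid_visible 00000000000000000000000000000000 || echo "masked"
```

This matches the pass/fail pattern in the trace: nsid 2 reports a real NGUID after the host is removed from nsid 1's allow list, while nsid 1 reports all zeros.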
00:17:13.089 [2024-11-18 00:22:36.734152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224514 ] 00:17:13.089 [2024-11-18 00:22:36.803191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.089 [2024-11-18 00:22:36.853023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.347 00:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.347 00:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:13.347 00:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:13.913 00:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:14.171 00:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 74e2a612-a943-459b-b9f5-ec29b0d967c7 00:17:14.171 00:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:14.171 00:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 74E2A612A943459BB9F5EC29B0D967C7 -i 00:17:14.429 00:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 191da529-272d-4450-b286-3e5c1dfa2c5a 00:17:14.429 00:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:14.429 00:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 191DA529272D4450B2863E5C1DFA2C5A -i 00:17:14.687 00:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:14.944 00:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:15.202 00:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:15.202 00:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:15.463 nvme0n1 00:17:15.463 00:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:15.463 00:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:16.042 nvme1n2 00:17:16.042 00:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:16.042 00:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:16.042 00:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:16.042 00:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:16.042 00:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:16.304 00:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:16.304 00:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:16.304 00:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:16.304 00:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:16.562 00:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 74e2a612-a943-459b-b9f5-ec29b0d967c7 == \7\4\e\2\a\6\1\2\-\a\9\4\3\-\4\5\9\b\-\b\9\f\5\-\e\c\2\9\b\0\d\9\6\7\c\7 ]] 00:17:16.562 00:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:16.562 00:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:16.562 00:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:17.128 00:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 191da529-272d-4450-b286-3e5c1dfa2c5a == \1\9\1\d\a\5\2\9\-\2\7\2\d\-\4\4\5\0\-\b\2\8\6\-\3\e\5\c\1\d\f\a\2\c\5\a ]] 00:17:17.128 00:22:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:17.128 00:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:17.386 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 74e2a612-a943-459b-b9f5-ec29b0d967c7 00:17:17.386 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:17.386 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 74E2A612A943459BB9F5EC29B0D967C7 00:17:17.386 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:17.386 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 74E2A612A943459BB9F5EC29B0D967C7 00:17:17.386 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:17.644 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:17.644 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:17.644 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:17.644 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:17.644 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:17.644 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:17.644 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:17.644 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 74E2A612A943459BB9F5EC29B0D967C7 00:17:17.644 [2024-11-18 00:22:41.452877] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:17.644 [2024-11-18 00:22:41.452916] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:17.644 [2024-11-18 00:22:41.452946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.644 request: 00:17:17.644 { 00:17:17.644 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.644 "namespace": { 00:17:17.644 "bdev_name": "invalid", 00:17:17.644 "nsid": 1, 00:17:17.644 "nguid": "74E2A612A943459BB9F5EC29B0D967C7", 00:17:17.644 "no_auto_visible": false 00:17:17.644 }, 00:17:17.644 "method": "nvmf_subsystem_add_ns", 00:17:17.644 "req_id": 1 00:17:17.645 } 00:17:17.645 Got JSON-RPC error response 00:17:17.645 response: 00:17:17.645 { 00:17:17.645 "code": -32602, 00:17:17.645 "message": "Invalid parameters" 00:17:17.645 } 00:17:17.903 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:17.903 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:17.903 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:17.903 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:17.903 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 74e2a612-a943-459b-b9f5-ec29b0d967c7 00:17:17.903 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:17.903 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 74E2A612A943459BB9F5EC29B0D967C7 -i 00:17:18.161 00:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:20.058 00:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:20.058 00:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:20.058 00:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:20.316 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:20.316 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 224514 00:17:20.316 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 224514 ']' 00:17:20.316 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 224514 00:17:20.316 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:20.316 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.316 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 224514 00:17:20.316 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:20.316 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:20.316 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 224514' 00:17:20.316 killing process with pid 224514 00:17:20.316 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 224514 00:17:20.316 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 224514 00:17:20.882 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:20.882 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:20.882 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:20.882 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:20.882 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:20.882 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:20.882 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:20.882 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:20.882 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:20.882 rmmod nvme_tcp 00:17:20.882 rmmod 
nvme_fabrics 00:17:21.140 rmmod nvme_keyring 00:17:21.140 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:21.140 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:21.140 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:21.140 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 222900 ']' 00:17:21.140 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 222900 00:17:21.140 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 222900 ']' 00:17:21.140 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 222900 00:17:21.140 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:21.140 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.140 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 222900 00:17:21.140 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.140 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.140 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 222900' 00:17:21.140 killing process with pid 222900 00:17:21.140 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 222900 00:17:21.140 00:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 222900 00:17:21.400 00:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:21.400 00:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:21.400 00:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:21.400 00:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:21.400 00:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:21.400 00:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:21.400 00:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:21.400 00:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:21.400 00:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:21.400 00:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.400 00:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.400 00:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.305 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:23.305 00:17:23.305 real 0m24.944s 00:17:23.305 user 0m36.407s 00:17:23.305 sys 0m4.675s 00:17:23.305 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.305 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:23.305 ************************************ 00:17:23.305 END TEST nvmf_ns_masking 00:17:23.305 ************************************ 00:17:23.305 00:22:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:23.305 00:22:47 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:23.305 00:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:23.305 00:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.305 00:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:23.305 ************************************ 00:17:23.305 START TEST nvmf_nvme_cli 00:17:23.305 ************************************ 00:17:23.305 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:23.564 * Looking for test storage... 00:17:23.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:23.564 00:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:23.564 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:23.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.565 --rc genhtml_branch_coverage=1 00:17:23.565 --rc genhtml_function_coverage=1 00:17:23.565 --rc genhtml_legend=1 00:17:23.565 --rc geninfo_all_blocks=1 00:17:23.565 --rc geninfo_unexecuted_blocks=1 00:17:23.565 
00:17:23.565 ' 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:23.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.565 --rc genhtml_branch_coverage=1 00:17:23.565 --rc genhtml_function_coverage=1 00:17:23.565 --rc genhtml_legend=1 00:17:23.565 --rc geninfo_all_blocks=1 00:17:23.565 --rc geninfo_unexecuted_blocks=1 00:17:23.565 00:17:23.565 ' 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:23.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.565 --rc genhtml_branch_coverage=1 00:17:23.565 --rc genhtml_function_coverage=1 00:17:23.565 --rc genhtml_legend=1 00:17:23.565 --rc geninfo_all_blocks=1 00:17:23.565 --rc geninfo_unexecuted_blocks=1 00:17:23.565 00:17:23.565 ' 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:23.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.565 --rc genhtml_branch_coverage=1 00:17:23.565 --rc genhtml_function_coverage=1 00:17:23.565 --rc genhtml_legend=1 00:17:23.565 --rc geninfo_all_blocks=1 00:17:23.565 --rc geninfo_unexecuted_blocks=1 00:17:23.565 00:17:23.565 ' 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
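The scripts/common.sh xtrace above is the lcov version gate: `lt 1.15 2` calls `cmp_versions`, which splits both version strings on `.` and `-` into arrays and compares them field by field, padding the shorter one with zeros. A self-contained sketch of that comparison (the function name `version_lt` and the numeric-only assumption are mine; the real helper additionally validates each field against `^[0-9]+$`):

```shell
# Return success if dotted version $1 is strictly less than $2.
version_lt() {
  local IFS=.-           # split fields on '.' and '-', as cmp_versions does
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( i = 0; i < n; i++ )); do
    # Missing fields compare as 0, so "2" behaves like "2.0".
    if (( ${ver1[i]:-0} < ${ver2[i]:-0} )); then
      return 0
    elif (( ${ver1[i]:-0} > ${ver2[i]:-0} )); then
      return 1
    fi
  done
  return 1               # all fields equal: not strictly less
}

version_lt 1.15 2 && echo "lcov too old for this option set"
```

Note the comparison is numeric per field, not lexicographic, so `2.9 < 2.10` holds, which a plain string compare would get wrong.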
00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.565 00:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:23.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:23.565 00:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:26.101 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:26.101 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:26.101 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:26.101 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:26.101 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:26.101 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:26.101 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:26.101 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:26.101 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:26.101 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:26.101 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:26.101 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:26.101 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:26.101 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:26.101 00:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:26.102 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:26.102 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.102 00:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:26.102 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:26.102 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.102 00:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:26.102 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:26.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:17:26.103 00:17:26.103 --- 10.0.0.2 ping statistics --- 00:17:26.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.103 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:26.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:26.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:17:26.103 00:17:26.103 --- 10.0.0.1 ping statistics --- 00:17:26.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.103 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:26.103 00:22:49 
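The namespace and addressing steps that `nvmf_tcp_init` just performed in the log can be summarized as a standalone sketch. Interface names (`cvl_0_0`, `cvl_0_1`), the namespace name, and the 10.0.0.0/24 addresses are taken directly from the log above; the `run` wrapper is an addition here so the sketch prints commands instead of executing them, since the real setup needs root and the physical ports.

```shell
# Dry-run sketch of the test-network setup from nvmf/common.sh:nvmf_tcp_init.
# DRY_RUN=1 (default) echoes each command; set DRY_RUN=0 and run as root on a
# host with the cvl_0_* ports to actually apply it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                                       # namespace hosting the target
run ip link set cvl_0_0 netns "$NS"                          # move target-side port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP on the host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP inside the namespace
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port
run ping -c 1 10.0.0.2                                       # host -> target reachability
run ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
```

With `DRY_RUN=1` the script is side-effect free, which matches how the log's `ipts` helper wraps `iptables` with an `SPDK_NVMF` comment for later cleanup.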
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=227443 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 227443 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 227443 ']' 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:26.103 [2024-11-18 00:22:49.629916] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:17:26.103 [2024-11-18 00:22:49.630006] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.103 [2024-11-18 00:22:49.700038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:26.103 [2024-11-18 00:22:49.744538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.103 [2024-11-18 00:22:49.744593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.103 [2024-11-18 00:22:49.744608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.103 [2024-11-18 00:22:49.744620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.103 [2024-11-18 00:22:49.744630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:26.103 [2024-11-18 00:22:49.746286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.103 [2024-11-18 00:22:49.746380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:26.103 [2024-11-18 00:22:49.746383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.103 [2024-11-18 00:22:49.746353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:26.103 [2024-11-18 00:22:49.889418] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:26.103 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:26.362 Malloc0 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:26.362 Malloc1 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:26.362 [2024-11-18 00:22:49.981000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.362 00:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:17:26.362 00:17:26.362 Discovery Log Number of Records 2, Generation counter 2 00:17:26.362 =====Discovery Log Entry 0====== 00:17:26.362 trtype: tcp 00:17:26.362 adrfam: ipv4 00:17:26.362 subtype: current discovery subsystem 00:17:26.362 treq: not required 00:17:26.362 portid: 0 00:17:26.362 trsvcid: 4420 
00:17:26.362 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:26.362 traddr: 10.0.0.2 00:17:26.362 eflags: explicit discovery connections, duplicate discovery information 00:17:26.362 sectype: none 00:17:26.362 =====Discovery Log Entry 1====== 00:17:26.362 trtype: tcp 00:17:26.362 adrfam: ipv4 00:17:26.362 subtype: nvme subsystem 00:17:26.362 treq: not required 00:17:26.362 portid: 0 00:17:26.362 trsvcid: 4420 00:17:26.362 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:26.362 traddr: 10.0.0.2 00:17:26.362 eflags: none 00:17:26.362 sectype: none 00:17:26.362 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:26.362 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:26.362 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:26.362 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:26.362 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:26.362 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:26.363 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:26.363 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:26.363 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:26.363 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:26.363 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:27.296 00:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:27.296 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:17:27.296 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:27.296 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:27.296 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:27.296 00:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:17:29.340 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:29.340 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:29.340 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:29.340 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:29.340 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:29.340 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:17:29.340 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:29.340 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:29.340 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:29.340 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:29.340 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:29.340 
00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:29.341 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:29.341 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:29.341 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:29.341 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:29.341 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:29.341 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:29.341 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:29.341 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:29.341 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:29.341 /dev/nvme0n2 ]] 00:17:29.341 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:29.341 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:29.341 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:29.341 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:29.341 00:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:29.341 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:29.341 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:29.341 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:17:29.341 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:29.341 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:29.341 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:29.341 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:29.341 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:29.341 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:29.341 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:29.341 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:29.341 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:29.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:29.639 rmmod nvme_tcp 00:17:29.639 rmmod nvme_fabrics 00:17:29.639 rmmod nvme_keyring 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 227443 ']' 
00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 227443 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 227443 ']' 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 227443 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 227443 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 227443' 00:17:29.639 killing process with pid 227443 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 227443 00:17:29.639 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 227443 00:17:29.913 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:29.913 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:29.913 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:29.913 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:29.913 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:17:29.913 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:17:29.913 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:17:29.913 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:29.913 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:29.913 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.913 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.913 00:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:32.458 00:17:32.458 real 0m8.561s 00:17:32.458 user 0m16.036s 00:17:32.458 sys 0m2.370s 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:32.458 ************************************ 00:17:32.458 END TEST nvmf_nvme_cli 00:17:32.458 ************************************ 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:32.458 ************************************ 00:17:32.458 START TEST 
nvmf_vfio_user 00:17:32.458 ************************************ 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:32.458 * Looking for test storage... 00:17:32.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:32.458 00:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:32.458 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:32.459 00:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:32.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.459 --rc genhtml_branch_coverage=1 00:17:32.459 --rc genhtml_function_coverage=1 00:17:32.459 --rc genhtml_legend=1 00:17:32.459 --rc geninfo_all_blocks=1 00:17:32.459 --rc geninfo_unexecuted_blocks=1 00:17:32.459 00:17:32.459 ' 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:32.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.459 --rc genhtml_branch_coverage=1 00:17:32.459 --rc genhtml_function_coverage=1 00:17:32.459 --rc genhtml_legend=1 00:17:32.459 --rc geninfo_all_blocks=1 00:17:32.459 --rc geninfo_unexecuted_blocks=1 00:17:32.459 00:17:32.459 ' 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:32.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.459 --rc genhtml_branch_coverage=1 00:17:32.459 --rc genhtml_function_coverage=1 00:17:32.459 --rc genhtml_legend=1 00:17:32.459 --rc geninfo_all_blocks=1 00:17:32.459 --rc geninfo_unexecuted_blocks=1 00:17:32.459 00:17:32.459 ' 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:32.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.459 --rc genhtml_branch_coverage=1 00:17:32.459 --rc genhtml_function_coverage=1 00:17:32.459 --rc genhtml_legend=1 00:17:32.459 --rc geninfo_all_blocks=1 00:17:32.459 --rc geninfo_unexecuted_blocks=1 00:17:32.459 00:17:32.459 ' 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.459 
00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:32.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:32.459 00:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:32.459 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:32.460 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.460 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:32.460 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:32.460 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:32.460 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:32.460 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:32.460 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:32.460 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=228375 00:17:32.460 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:32.460 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 228375' 00:17:32.460 Process pid: 228375 00:17:32.460 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:32.460 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 228375 00:17:32.460 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
228375 ']' 00:17:32.460 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.460 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.460 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.460 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.460 00:22:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:32.460 [2024-11-18 00:22:55.945929] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:17:32.460 [2024-11-18 00:22:55.946022] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.460 [2024-11-18 00:22:56.015091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:32.460 [2024-11-18 00:22:56.063282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.460 [2024-11-18 00:22:56.063354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.460 [2024-11-18 00:22:56.063370] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.460 [2024-11-18 00:22:56.063398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.460 [2024-11-18 00:22:56.063408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:32.460 [2024-11-18 00:22:56.064925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.460 [2024-11-18 00:22:56.064991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.460 [2024-11-18 00:22:56.065058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:32.460 [2024-11-18 00:22:56.065061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.460 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.460 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:32.460 00:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:33.394 00:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:33.652 00:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:33.652 00:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:33.652 00:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:33.652 00:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:33.911 00:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:34.168 Malloc1 00:17:34.168 00:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:34.426 00:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:34.685 00:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:34.944 00:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:34.944 00:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:34.944 00:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:35.203 Malloc2 00:17:35.203 00:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:35.462 00:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:35.720 00:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:35.977 00:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:35.977 00:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:35.977 00:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:17:35.977 00:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:35.977 00:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:35.977 00:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:35.977 [2024-11-18 00:22:59.723637] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:17:35.977 [2024-11-18 00:22:59.723692] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228805 ] 00:17:35.978 [2024-11-18 00:22:59.774542] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:35.978 [2024-11-18 00:22:59.779765] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:35.978 [2024-11-18 00:22:59.779800] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ffa5d3e7000 00:17:35.978 [2024-11-18 00:22:59.780760] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:35.978 [2024-11-18 00:22:59.781755] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:35.978 [2024-11-18 00:22:59.782762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:35.978 [2024-11-18 00:22:59.783769] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:35.978 [2024-11-18 00:22:59.784777] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:35.978 [2024-11-18 00:22:59.785781] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:35.978 [2024-11-18 00:22:59.786788] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:35.978 [2024-11-18 00:22:59.787791] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:35.978 [2024-11-18 00:22:59.788797] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:35.978 [2024-11-18 00:22:59.788817] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ffa5c0df000 00:17:35.978 [2024-11-18 00:22:59.789933] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:36.238 [2024-11-18 00:22:59.805222] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:36.238 [2024-11-18 00:22:59.805262] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:17:36.238 [2024-11-18 00:22:59.809935] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:36.238 [2024-11-18 00:22:59.809996] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:36.238 [2024-11-18 00:22:59.810092] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:17:36.238 [2024-11-18 00:22:59.810126] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:17:36.238 [2024-11-18 00:22:59.810137] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:17:36.238 [2024-11-18 00:22:59.812321] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:36.238 [2024-11-18 00:22:59.812344] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:17:36.238 [2024-11-18 00:22:59.812357] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:17:36.238 [2024-11-18 00:22:59.812930] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:36.238 [2024-11-18 00:22:59.812948] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:17:36.238 [2024-11-18 00:22:59.812961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:36.238 [2024-11-18 00:22:59.813935] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:36.238 [2024-11-18 00:22:59.813959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:36.238 [2024-11-18 00:22:59.814938] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:36.238 [2024-11-18 00:22:59.814957] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:36.238 [2024-11-18 00:22:59.814965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:36.238 [2024-11-18 00:22:59.814976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:36.238 [2024-11-18 00:22:59.815086] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:17:36.239 [2024-11-18 00:22:59.815094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:36.239 [2024-11-18 00:22:59.815103] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:36.239 [2024-11-18 00:22:59.815953] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:36.239 [2024-11-18 00:22:59.816949] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:36.239 [2024-11-18 00:22:59.817958] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:36.239 [2024-11-18 00:22:59.818953] 
vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:36.239 [2024-11-18 00:22:59.819064] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:36.239 [2024-11-18 00:22:59.819964] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:36.239 [2024-11-18 00:22:59.819981] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:36.239 [2024-11-18 00:22:59.819990] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820015] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:17:36.239 [2024-11-18 00:22:59.820030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820060] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:36.239 [2024-11-18 00:22:59.820070] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:36.239 [2024-11-18 00:22:59.820077] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:36.239 [2024-11-18 00:22:59.820098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:36.239 [2024-11-18 00:22:59.820165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:17:36.239 [2024-11-18 00:22:59.820184] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:17:36.239 [2024-11-18 00:22:59.820192] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:17:36.239 [2024-11-18 00:22:59.820203] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:17:36.239 [2024-11-18 00:22:59.820212] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:36.239 [2024-11-18 00:22:59.820223] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:17:36.239 [2024-11-18 00:22:59.820233] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:17:36.239 [2024-11-18 00:22:59.820241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:36.239 [2024-11-18 00:22:59.820306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:36.239 [2024-11-18 00:22:59.820332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.239 [2024-11-18 00:22:59.820345] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.239 [2024-11-18 00:22:59.820358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.239 [2024-11-18 00:22:59.820369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.239 [2024-11-18 00:22:59.820378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820404] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:36.239 [2024-11-18 00:22:59.820416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:36.239 [2024-11-18 00:22:59.820433] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:17:36.239 [2024-11-18 00:22:59.820442] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820465] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:36.239 [2024-11-18 00:22:59.820491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:36.239 [2024-11-18 00:22:59.820560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820594] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:36.239 [2024-11-18 00:22:59.820619] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:36.239 [2024-11-18 00:22:59.820625] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:36.239 [2024-11-18 00:22:59.820634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:36.239 [2024-11-18 00:22:59.820651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:36.239 [2024-11-18 00:22:59.820691] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:17:36.239 [2024-11-18 00:22:59.820708] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820724] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820735] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:36.239 [2024-11-18 00:22:59.820743] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:36.239 [2024-11-18 00:22:59.820748] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:36.239 [2024-11-18 00:22:59.820757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:36.239 [2024-11-18 00:22:59.820786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:36.239 [2024-11-18 00:22:59.820809] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820824] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820836] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:36.239 [2024-11-18 00:22:59.820843] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:36.239 [2024-11-18 00:22:59.820849] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:36.239 [2024-11-18 00:22:59.820858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:36.239 [2024-11-18 00:22:59.820874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:17:36.239 [2024-11-18 00:22:59.820889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820955] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:36.239 [2024-11-18 00:22:59.820962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:17:36.239 [2024-11-18 00:22:59.820971] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:17:36.240 [2024-11-18 00:22:59.821000] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:36.240 [2024-11-18 00:22:59.821018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:36.240 [2024-11-18 00:22:59.821037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:36.240 [2024-11-18 00:22:59.821048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:36.240 [2024-11-18 00:22:59.821063] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:36.240 [2024-11-18 00:22:59.821074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:36.240 [2024-11-18 00:22:59.821089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:36.240 [2024-11-18 00:22:59.821100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:36.240 [2024-11-18 00:22:59.821122] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:36.240 [2024-11-18 00:22:59.821131] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:36.240 [2024-11-18 00:22:59.821137] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:36.240 [2024-11-18 00:22:59.821143] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:36.240 [2024-11-18 00:22:59.821148] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:36.240 [2024-11-18 00:22:59.821157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:36.240 [2024-11-18 00:22:59.821168] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:36.240 [2024-11-18 00:22:59.821176] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:36.240 [2024-11-18 00:22:59.821181] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:36.240 [2024-11-18 00:22:59.821190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:36.240 [2024-11-18 00:22:59.821200] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:36.240 [2024-11-18 00:22:59.821208] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:36.240 [2024-11-18 00:22:59.821213] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:36.240 [2024-11-18 00:22:59.821222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:36.240 [2024-11-18 00:22:59.821234] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:36.240 [2024-11-18 00:22:59.821241] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:36.240 [2024-11-18 00:22:59.821247] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:36.240 [2024-11-18 00:22:59.821261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:36.240 [2024-11-18 00:22:59.821273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:36.240 [2024-11-18 
00:22:59.821307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:36.240 [2024-11-18 00:22:59.821338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:36.240 [2024-11-18 00:22:59.821351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:36.240 ===================================================== 00:17:36.240 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:36.240 ===================================================== 00:17:36.240 Controller Capabilities/Features 00:17:36.240 ================================ 00:17:36.240 Vendor ID: 4e58 00:17:36.240 Subsystem Vendor ID: 4e58 00:17:36.240 Serial Number: SPDK1 00:17:36.240 Model Number: SPDK bdev Controller 00:17:36.240 Firmware Version: 25.01 00:17:36.240 Recommended Arb Burst: 6 00:17:36.240 IEEE OUI Identifier: 8d 6b 50 00:17:36.240 Multi-path I/O 00:17:36.240 May have multiple subsystem ports: Yes 00:17:36.240 May have multiple controllers: Yes 00:17:36.240 Associated with SR-IOV VF: No 00:17:36.240 Max Data Transfer Size: 131072 00:17:36.240 Max Number of Namespaces: 32 00:17:36.240 Max Number of I/O Queues: 127 00:17:36.240 NVMe Specification Version (VS): 1.3 00:17:36.240 NVMe Specification Version (Identify): 1.3 00:17:36.240 Maximum Queue Entries: 256 00:17:36.240 Contiguous Queues Required: Yes 00:17:36.240 Arbitration Mechanisms Supported 00:17:36.240 Weighted Round Robin: Not Supported 00:17:36.240 Vendor Specific: Not Supported 00:17:36.240 Reset Timeout: 15000 ms 00:17:36.240 Doorbell Stride: 4 bytes 00:17:36.240 NVM Subsystem Reset: Not Supported 00:17:36.240 Command Sets Supported 00:17:36.240 NVM Command Set: Supported 00:17:36.240 Boot Partition: Not Supported 00:17:36.240 Memory Page Size Minimum: 4096 bytes 00:17:36.240 
Memory Page Size Maximum: 4096 bytes 00:17:36.240 Persistent Memory Region: Not Supported 00:17:36.240 Optional Asynchronous Events Supported 00:17:36.240 Namespace Attribute Notices: Supported 00:17:36.240 Firmware Activation Notices: Not Supported 00:17:36.240 ANA Change Notices: Not Supported 00:17:36.240 PLE Aggregate Log Change Notices: Not Supported 00:17:36.240 LBA Status Info Alert Notices: Not Supported 00:17:36.240 EGE Aggregate Log Change Notices: Not Supported 00:17:36.240 Normal NVM Subsystem Shutdown event: Not Supported 00:17:36.240 Zone Descriptor Change Notices: Not Supported 00:17:36.240 Discovery Log Change Notices: Not Supported 00:17:36.240 Controller Attributes 00:17:36.240 128-bit Host Identifier: Supported 00:17:36.240 Non-Operational Permissive Mode: Not Supported 00:17:36.240 NVM Sets: Not Supported 00:17:36.240 Read Recovery Levels: Not Supported 00:17:36.240 Endurance Groups: Not Supported 00:17:36.240 Predictable Latency Mode: Not Supported 00:17:36.240 Traffic Based Keep ALive: Not Supported 00:17:36.240 Namespace Granularity: Not Supported 00:17:36.240 SQ Associations: Not Supported 00:17:36.240 UUID List: Not Supported 00:17:36.240 Multi-Domain Subsystem: Not Supported 00:17:36.240 Fixed Capacity Management: Not Supported 00:17:36.240 Variable Capacity Management: Not Supported 00:17:36.240 Delete Endurance Group: Not Supported 00:17:36.240 Delete NVM Set: Not Supported 00:17:36.240 Extended LBA Formats Supported: Not Supported 00:17:36.240 Flexible Data Placement Supported: Not Supported 00:17:36.240 00:17:36.240 Controller Memory Buffer Support 00:17:36.240 ================================ 00:17:36.240 Supported: No 00:17:36.240 00:17:36.240 Persistent Memory Region Support 00:17:36.240 ================================ 00:17:36.240 Supported: No 00:17:36.240 00:17:36.240 Admin Command Set Attributes 00:17:36.240 ============================ 00:17:36.240 Security Send/Receive: Not Supported 00:17:36.240 Format NVM: Not Supported 
00:17:36.240 Firmware Activate/Download: Not Supported 00:17:36.240 Namespace Management: Not Supported 00:17:36.240 Device Self-Test: Not Supported 00:17:36.240 Directives: Not Supported 00:17:36.240 NVMe-MI: Not Supported 00:17:36.240 Virtualization Management: Not Supported 00:17:36.240 Doorbell Buffer Config: Not Supported 00:17:36.240 Get LBA Status Capability: Not Supported 00:17:36.240 Command & Feature Lockdown Capability: Not Supported 00:17:36.240 Abort Command Limit: 4 00:17:36.240 Async Event Request Limit: 4 00:17:36.240 Number of Firmware Slots: N/A 00:17:36.240 Firmware Slot 1 Read-Only: N/A 00:17:36.240 Firmware Activation Without Reset: N/A 00:17:36.240 Multiple Update Detection Support: N/A 00:17:36.240 Firmware Update Granularity: No Information Provided 00:17:36.240 Per-Namespace SMART Log: No 00:17:36.240 Asymmetric Namespace Access Log Page: Not Supported 00:17:36.240 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:36.240 Command Effects Log Page: Supported 00:17:36.240 Get Log Page Extended Data: Supported 00:17:36.240 Telemetry Log Pages: Not Supported 00:17:36.240 Persistent Event Log Pages: Not Supported 00:17:36.241 Supported Log Pages Log Page: May Support 00:17:36.241 Commands Supported & Effects Log Page: Not Supported 00:17:36.241 Feature Identifiers & Effects Log Page:May Support 00:17:36.241 NVMe-MI Commands & Effects Log Page: May Support 00:17:36.241 Data Area 4 for Telemetry Log: Not Supported 00:17:36.241 Error Log Page Entries Supported: 128 00:17:36.241 Keep Alive: Supported 00:17:36.241 Keep Alive Granularity: 10000 ms 00:17:36.241 00:17:36.241 NVM Command Set Attributes 00:17:36.241 ========================== 00:17:36.241 Submission Queue Entry Size 00:17:36.241 Max: 64 00:17:36.241 Min: 64 00:17:36.241 Completion Queue Entry Size 00:17:36.241 Max: 16 00:17:36.241 Min: 16 00:17:36.241 Number of Namespaces: 32 00:17:36.241 Compare Command: Supported 00:17:36.241 Write Uncorrectable Command: Not Supported 00:17:36.241 Dataset 
Management Command: Supported 00:17:36.241 Write Zeroes Command: Supported 00:17:36.241 Set Features Save Field: Not Supported 00:17:36.241 Reservations: Not Supported 00:17:36.241 Timestamp: Not Supported 00:17:36.241 Copy: Supported 00:17:36.241 Volatile Write Cache: Present 00:17:36.241 Atomic Write Unit (Normal): 1 00:17:36.241 Atomic Write Unit (PFail): 1 00:17:36.241 Atomic Compare & Write Unit: 1 00:17:36.241 Fused Compare & Write: Supported 00:17:36.241 Scatter-Gather List 00:17:36.241 SGL Command Set: Supported (Dword aligned) 00:17:36.241 SGL Keyed: Not Supported 00:17:36.241 SGL Bit Bucket Descriptor: Not Supported 00:17:36.241 SGL Metadata Pointer: Not Supported 00:17:36.241 Oversized SGL: Not Supported 00:17:36.241 SGL Metadata Address: Not Supported 00:17:36.241 SGL Offset: Not Supported 00:17:36.241 Transport SGL Data Block: Not Supported 00:17:36.241 Replay Protected Memory Block: Not Supported 00:17:36.241 00:17:36.241 Firmware Slot Information 00:17:36.241 ========================= 00:17:36.241 Active slot: 1 00:17:36.241 Slot 1 Firmware Revision: 25.01 00:17:36.241 00:17:36.241 00:17:36.241 Commands Supported and Effects 00:17:36.241 ============================== 00:17:36.241 Admin Commands 00:17:36.241 -------------- 00:17:36.241 Get Log Page (02h): Supported 00:17:36.241 Identify (06h): Supported 00:17:36.241 Abort (08h): Supported 00:17:36.241 Set Features (09h): Supported 00:17:36.241 Get Features (0Ah): Supported 00:17:36.241 Asynchronous Event Request (0Ch): Supported 00:17:36.241 Keep Alive (18h): Supported 00:17:36.241 I/O Commands 00:17:36.241 ------------ 00:17:36.241 Flush (00h): Supported LBA-Change 00:17:36.241 Write (01h): Supported LBA-Change 00:17:36.241 Read (02h): Supported 00:17:36.241 Compare (05h): Supported 00:17:36.241 Write Zeroes (08h): Supported LBA-Change 00:17:36.241 Dataset Management (09h): Supported LBA-Change 00:17:36.241 Copy (19h): Supported LBA-Change 00:17:36.241 00:17:36.241 Error Log 00:17:36.241 ========= 
00:17:36.241 00:17:36.241 Arbitration 00:17:36.241 =========== 00:17:36.241 Arbitration Burst: 1 00:17:36.241 00:17:36.241 Power Management 00:17:36.241 ================ 00:17:36.241 Number of Power States: 1 00:17:36.241 Current Power State: Power State #0 00:17:36.241 Power State #0: 00:17:36.241 Max Power: 0.00 W 00:17:36.241 Non-Operational State: Operational 00:17:36.241 Entry Latency: Not Reported 00:17:36.241 Exit Latency: Not Reported 00:17:36.241 Relative Read Throughput: 0 00:17:36.241 Relative Read Latency: 0 00:17:36.241 Relative Write Throughput: 0 00:17:36.241 Relative Write Latency: 0 00:17:36.241 Idle Power: Not Reported 00:17:36.241 Active Power: Not Reported 00:17:36.241 Non-Operational Permissive Mode: Not Supported 00:17:36.241 00:17:36.241 Health Information 00:17:36.241 ================== 00:17:36.241 Critical Warnings: 00:17:36.241 Available Spare Space: OK 00:17:36.241 Temperature: OK 00:17:36.241 Device Reliability: OK 00:17:36.241 Read Only: No 00:17:36.241 Volatile Memory Backup: OK 00:17:36.241 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:36.241 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:36.241 Available Spare: 0% 00:17:36.241 Available Sp[2024-11-18 00:22:59.821482] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:36.241 [2024-11-18 00:22:59.821499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:36.241 [2024-11-18 00:22:59.821547] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:17:36.241 [2024-11-18 00:22:59.821566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.241 [2024-11-18 00:22:59.821578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.241 [2024-11-18 00:22:59.821588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.241 [2024-11-18 00:22:59.821613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.241 [2024-11-18 00:22:59.824321] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:36.241 [2024-11-18 00:22:59.824345] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:36.241 [2024-11-18 00:22:59.824985] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:36.241 [2024-11-18 00:22:59.825077] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:17:36.241 [2024-11-18 00:22:59.825090] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:17:36.241 [2024-11-18 00:22:59.826000] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:36.241 [2024-11-18 00:22:59.826023] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:17:36.241 [2024-11-18 00:22:59.826079] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:36.241 [2024-11-18 00:22:59.828039] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:36.241 are Threshold: 0% 00:17:36.241 Life Percentage Used: 0% 00:17:36.241 Data Units Read: 0 00:17:36.241 Data 
Units Written: 0 00:17:36.241 Host Read Commands: 0 00:17:36.241 Host Write Commands: 0 00:17:36.241 Controller Busy Time: 0 minutes 00:17:36.241 Power Cycles: 0 00:17:36.241 Power On Hours: 0 hours 00:17:36.241 Unsafe Shutdowns: 0 00:17:36.241 Unrecoverable Media Errors: 0 00:17:36.241 Lifetime Error Log Entries: 0 00:17:36.241 Warning Temperature Time: 0 minutes 00:17:36.241 Critical Temperature Time: 0 minutes 00:17:36.241 00:17:36.241 Number of Queues 00:17:36.242 ================ 00:17:36.242 Number of I/O Submission Queues: 127 00:17:36.242 Number of I/O Completion Queues: 127 00:17:36.242 00:17:36.242 Active Namespaces 00:17:36.242 ================= 00:17:36.242 Namespace ID:1 00:17:36.242 Error Recovery Timeout: Unlimited 00:17:36.242 Command Set Identifier: NVM (00h) 00:17:36.242 Deallocate: Supported 00:17:36.242 Deallocated/Unwritten Error: Not Supported 00:17:36.242 Deallocated Read Value: Unknown 00:17:36.242 Deallocate in Write Zeroes: Not Supported 00:17:36.242 Deallocated Guard Field: 0xFFFF 00:17:36.242 Flush: Supported 00:17:36.242 Reservation: Supported 00:17:36.242 Namespace Sharing Capabilities: Multiple Controllers 00:17:36.242 Size (in LBAs): 131072 (0GiB) 00:17:36.242 Capacity (in LBAs): 131072 (0GiB) 00:17:36.242 Utilization (in LBAs): 131072 (0GiB) 00:17:36.242 NGUID: 6E81247102DA4856B2BF53D67B77D7EC 00:17:36.242 UUID: 6e812471-02da-4856-b2bf-53d67b77d7ec 00:17:36.242 Thin Provisioning: Not Supported 00:17:36.242 Per-NS Atomic Units: Yes 00:17:36.242 Atomic Boundary Size (Normal): 0 00:17:36.242 Atomic Boundary Size (PFail): 0 00:17:36.242 Atomic Boundary Offset: 0 00:17:36.242 Maximum Single Source Range Length: 65535 00:17:36.242 Maximum Copy Length: 65535 00:17:36.242 Maximum Source Range Count: 1 00:17:36.242 NGUID/EUI64 Never Reused: No 00:17:36.242 Namespace Write Protected: No 00:17:36.242 Number of LBA Formats: 1 00:17:36.242 Current LBA Format: LBA Format #00 00:17:36.242 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:17:36.242 00:17:36.242 00:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:36.500 [2024-11-18 00:23:00.073224] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:41.776 Initializing NVMe Controllers 00:17:41.776 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:41.776 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:41.776 Initialization complete. Launching workers. 00:17:41.776 ======================================================== 00:17:41.776 Latency(us) 00:17:41.776 Device Information : IOPS MiB/s Average min max 00:17:41.776 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34846.40 136.12 3673.02 1149.67 7365.22 00:17:41.776 ======================================================== 00:17:41.776 Total : 34846.40 136.12 3673.02 1149.67 7365.22 00:17:41.776 00:17:41.776 [2024-11-18 00:23:05.096419] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:41.776 00:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:41.776 [2024-11-18 00:23:05.352721] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:47.066 Initializing NVMe Controllers 00:17:47.066 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:17:47.066 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:47.066 Initialization complete. Launching workers. 00:17:47.066 ======================================================== 00:17:47.066 Latency(us) 00:17:47.066 Device Information : IOPS MiB/s Average min max 00:17:47.066 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.16 62.70 7984.45 5998.20 11984.18 00:17:47.066 ======================================================== 00:17:47.066 Total : 16051.16 62.70 7984.45 5998.20 11984.18 00:17:47.066 00:17:47.066 [2024-11-18 00:23:10.390665] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:47.067 00:23:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:47.067 [2024-11-18 00:23:10.619808] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:52.335 [2024-11-18 00:23:15.691686] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:52.335 Initializing NVMe Controllers 00:17:52.335 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:52.335 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:52.335 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:52.335 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:52.335 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:52.335 Initialization complete. Launching workers. 
00:17:52.335 Starting thread on core 2 00:17:52.335 Starting thread on core 3 00:17:52.335 Starting thread on core 1 00:17:52.335 00:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:52.335 [2024-11-18 00:23:16.015786] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:55.616 [2024-11-18 00:23:19.082591] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:55.616 Initializing NVMe Controllers 00:17:55.616 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:55.616 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:55.616 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:55.616 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:55.616 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:55.616 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:55.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:55.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:55.616 Initialization complete. Launching workers. 
00:17:55.616 Starting thread on core 1 with urgent priority queue 00:17:55.616 Starting thread on core 2 with urgent priority queue 00:17:55.616 Starting thread on core 3 with urgent priority queue 00:17:55.616 Starting thread on core 0 with urgent priority queue 00:17:55.616 SPDK bdev Controller (SPDK1 ) core 0: 2657.00 IO/s 37.64 secs/100000 ios 00:17:55.616 SPDK bdev Controller (SPDK1 ) core 1: 3016.67 IO/s 33.15 secs/100000 ios 00:17:55.616 SPDK bdev Controller (SPDK1 ) core 2: 2896.33 IO/s 34.53 secs/100000 ios 00:17:55.616 SPDK bdev Controller (SPDK1 ) core 3: 3192.00 IO/s 31.33 secs/100000 ios 00:17:55.616 ======================================================== 00:17:55.616 00:17:55.616 00:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:55.616 [2024-11-18 00:23:19.405823] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:55.616 Initializing NVMe Controllers 00:17:55.616 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:55.616 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:55.616 Namespace ID: 1 size: 0GB 00:17:55.616 Initialization complete. 00:17:55.616 INFO: using host memory buffer for IO 00:17:55.616 Hello world! 
00:17:55.873 [2024-11-18 00:23:19.439380] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:55.873 00:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:56.131 [2024-11-18 00:23:19.742803] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:57.064 Initializing NVMe Controllers 00:17:57.064 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:57.064 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:57.064 Initialization complete. Launching workers. 00:17:57.064 submit (in ns) avg, min, max = 9072.5, 3552.2, 4023588.9 00:17:57.064 complete (in ns) avg, min, max = 29488.4, 2080.0, 4998621.1 00:17:57.064 00:17:57.064 Submit histogram 00:17:57.064 ================ 00:17:57.064 Range in us Cumulative Count 00:17:57.064 3.532 - 3.556: 0.0078% ( 1) 00:17:57.064 3.556 - 3.579: 0.2501% ( 31) 00:17:57.064 3.579 - 3.603: 0.9221% ( 86) 00:17:57.064 3.603 - 3.627: 2.9069% ( 254) 00:17:57.064 3.627 - 3.650: 7.0329% ( 528) 00:17:57.064 3.650 - 3.674: 13.1984% ( 789) 00:17:57.064 3.674 - 3.698: 20.8721% ( 982) 00:17:57.064 3.698 - 3.721: 28.9130% ( 1029) 00:17:57.064 3.721 - 3.745: 36.1882% ( 931) 00:17:57.064 3.745 - 3.769: 42.2443% ( 775) 00:17:57.064 3.769 - 3.793: 47.4330% ( 664) 00:17:57.064 3.793 - 3.816: 52.5905% ( 660) 00:17:57.064 3.816 - 3.840: 57.4744% ( 625) 00:17:57.064 3.840 - 3.864: 61.7879% ( 552) 00:17:57.064 3.864 - 3.887: 65.9530% ( 533) 00:17:57.064 3.887 - 3.911: 69.9695% ( 514) 00:17:57.064 3.911 - 3.935: 74.2440% ( 547) 00:17:57.064 3.935 - 3.959: 77.9401% ( 473) 00:17:57.064 3.959 - 3.982: 81.3941% ( 442) 00:17:57.064 3.982 - 4.006: 84.3401% ( 377) 00:17:57.064 4.006 - 4.030: 86.3015% ( 251) 
00:17:57.064 4.030 - 4.053: 88.0753% ( 227) 00:17:57.064 4.053 - 4.077: 89.7789% ( 218) 00:17:57.064 4.077 - 4.101: 91.2558% ( 189) 00:17:57.064 4.101 - 4.124: 92.2873% ( 132) 00:17:57.064 4.124 - 4.148: 93.2406% ( 122) 00:17:57.065 4.148 - 4.172: 93.9517% ( 91) 00:17:57.065 4.172 - 4.196: 94.6863% ( 94) 00:17:57.065 4.196 - 4.219: 95.2098% ( 67) 00:17:57.065 4.219 - 4.243: 95.6083% ( 51) 00:17:57.065 4.243 - 4.267: 95.9365% ( 42) 00:17:57.065 4.267 - 4.290: 96.1241% ( 24) 00:17:57.065 4.290 - 4.314: 96.2882% ( 21) 00:17:57.065 4.314 - 4.338: 96.4523% ( 21) 00:17:57.065 4.338 - 4.361: 96.5304% ( 10) 00:17:57.065 4.361 - 4.385: 96.6086% ( 10) 00:17:57.065 4.385 - 4.409: 96.6711% ( 8) 00:17:57.065 4.409 - 4.433: 96.7180% ( 6) 00:17:57.065 4.433 - 4.456: 96.7492% ( 4) 00:17:57.065 4.456 - 4.480: 96.7727% ( 3) 00:17:57.065 4.480 - 4.504: 96.8118% ( 5) 00:17:57.065 4.504 - 4.527: 96.8274% ( 2) 00:17:57.065 4.527 - 4.551: 96.8352% ( 1) 00:17:57.065 4.551 - 4.575: 96.8508% ( 2) 00:17:57.065 4.575 - 4.599: 96.8743% ( 3) 00:17:57.065 4.622 - 4.646: 96.9055% ( 4) 00:17:57.065 4.646 - 4.670: 96.9368% ( 4) 00:17:57.065 4.670 - 4.693: 96.9524% ( 2) 00:17:57.065 4.693 - 4.717: 97.0071% ( 7) 00:17:57.065 4.717 - 4.741: 97.0462% ( 5) 00:17:57.065 4.741 - 4.764: 97.0774% ( 4) 00:17:57.065 4.764 - 4.788: 97.1165% ( 5) 00:17:57.065 4.788 - 4.812: 97.1400% ( 3) 00:17:57.065 4.812 - 4.836: 97.1556% ( 2) 00:17:57.065 4.836 - 4.859: 97.1790% ( 3) 00:17:57.065 4.859 - 4.883: 97.2259% ( 6) 00:17:57.065 4.883 - 4.907: 97.2728% ( 6) 00:17:57.065 4.907 - 4.930: 97.3353% ( 8) 00:17:57.065 4.930 - 4.954: 97.3900% ( 7) 00:17:57.065 4.954 - 4.978: 97.4291% ( 5) 00:17:57.065 4.978 - 5.001: 97.4525% ( 3) 00:17:57.065 5.001 - 5.025: 97.4760% ( 3) 00:17:57.065 5.025 - 5.049: 97.5150% ( 5) 00:17:57.065 5.049 - 5.073: 97.5541% ( 5) 00:17:57.065 5.073 - 5.096: 97.5619% ( 1) 00:17:57.065 5.096 - 5.120: 97.5776% ( 2) 00:17:57.065 5.120 - 5.144: 97.5854% ( 1) 00:17:57.065 5.144 - 5.167: 97.6088% ( 3) 
00:17:57.065 5.191 - 5.215: 97.6166% ( 1) 00:17:57.065 5.239 - 5.262: 97.6323% ( 2) 00:17:57.065 5.262 - 5.286: 97.6401% ( 1) 00:17:57.065 5.286 - 5.310: 97.6479% ( 1) 00:17:57.065 5.333 - 5.357: 97.6635% ( 2) 00:17:57.065 5.381 - 5.404: 97.6713% ( 1) 00:17:57.065 5.476 - 5.499: 97.6791% ( 1) 00:17:57.065 5.950 - 5.973: 97.6948% ( 2) 00:17:57.065 5.973 - 5.997: 97.7104% ( 2) 00:17:57.065 5.997 - 6.021: 97.7182% ( 1) 00:17:57.065 6.116 - 6.163: 97.7260% ( 1) 00:17:57.065 6.163 - 6.210: 97.7495% ( 3) 00:17:57.065 6.210 - 6.258: 97.7573% ( 1) 00:17:57.065 6.353 - 6.400: 97.7651% ( 1) 00:17:57.065 6.590 - 6.637: 97.7729% ( 1) 00:17:57.065 6.732 - 6.779: 97.7807% ( 1) 00:17:57.065 6.779 - 6.827: 97.7964% ( 2) 00:17:57.065 6.827 - 6.874: 97.8120% ( 2) 00:17:57.065 6.921 - 6.969: 97.8198% ( 1) 00:17:57.065 6.969 - 7.016: 97.8276% ( 1) 00:17:57.065 7.016 - 7.064: 97.8354% ( 1) 00:17:57.065 7.064 - 7.111: 97.8432% ( 1) 00:17:57.065 7.111 - 7.159: 97.8511% ( 1) 00:17:57.065 7.253 - 7.301: 97.8589% ( 1) 00:17:57.065 7.348 - 7.396: 97.8823% ( 3) 00:17:57.065 7.443 - 7.490: 97.8901% ( 1) 00:17:57.065 7.538 - 7.585: 97.9136% ( 3) 00:17:57.065 7.680 - 7.727: 97.9370% ( 3) 00:17:57.065 7.775 - 7.822: 97.9526% ( 2) 00:17:57.065 7.870 - 7.917: 97.9605% ( 1) 00:17:57.065 7.917 - 7.964: 97.9761% ( 2) 00:17:57.065 7.964 - 8.012: 97.9917% ( 2) 00:17:57.065 8.059 - 8.107: 98.0230% ( 4) 00:17:57.065 8.201 - 8.249: 98.0308% ( 1) 00:17:57.065 8.249 - 8.296: 98.0464% ( 2) 00:17:57.065 8.296 - 8.344: 98.0620% ( 2) 00:17:57.065 8.344 - 8.391: 98.0699% ( 1) 00:17:57.065 8.391 - 8.439: 98.0777% ( 1) 00:17:57.065 8.439 - 8.486: 98.0855% ( 1) 00:17:57.065 8.533 - 8.581: 98.0933% ( 1) 00:17:57.065 8.581 - 8.628: 98.1167% ( 3) 00:17:57.065 8.628 - 8.676: 98.1324% ( 2) 00:17:57.065 8.676 - 8.723: 98.1402% ( 1) 00:17:57.065 8.723 - 8.770: 98.1558% ( 2) 00:17:57.065 8.865 - 8.913: 98.1714% ( 2) 00:17:57.065 8.913 - 8.960: 98.1793% ( 1) 00:17:57.065 8.960 - 9.007: 98.1871% ( 1) 00:17:57.065 9.292 - 
9.339: 98.1949% ( 1) 00:17:57.065 9.387 - 9.434: 98.2105% ( 2) 00:17:57.065 9.529 - 9.576: 98.2183% ( 1) 00:17:57.065 9.766 - 9.813: 98.2340% ( 2) 00:17:57.065 9.813 - 9.861: 98.2418% ( 1) 00:17:57.065 9.861 - 9.908: 98.2574% ( 2) 00:17:57.065 10.003 - 10.050: 98.2652% ( 1) 00:17:57.065 10.098 - 10.145: 98.2730% ( 1) 00:17:57.065 10.145 - 10.193: 98.2887% ( 2) 00:17:57.065 10.240 - 10.287: 98.2965% ( 1) 00:17:57.065 10.382 - 10.430: 98.3043% ( 1) 00:17:57.065 10.430 - 10.477: 98.3121% ( 1) 00:17:57.065 10.477 - 10.524: 98.3199% ( 1) 00:17:57.065 10.524 - 10.572: 98.3277% ( 1) 00:17:57.065 10.572 - 10.619: 98.3434% ( 2) 00:17:57.065 10.714 - 10.761: 98.3590% ( 2) 00:17:57.065 10.809 - 10.856: 98.3746% ( 2) 00:17:57.065 10.856 - 10.904: 98.3824% ( 1) 00:17:57.065 10.904 - 10.951: 98.3902% ( 1) 00:17:57.065 10.951 - 10.999: 98.4059% ( 2) 00:17:57.065 11.046 - 11.093: 98.4215% ( 2) 00:17:57.065 11.473 - 11.520: 98.4371% ( 2) 00:17:57.065 11.899 - 11.947: 98.4449% ( 1) 00:17:57.065 11.994 - 12.041: 98.4528% ( 1) 00:17:57.065 12.041 - 12.089: 98.4684% ( 2) 00:17:57.065 12.089 - 12.136: 98.4762% ( 1) 00:17:57.065 12.136 - 12.231: 98.4840% ( 1) 00:17:57.065 12.326 - 12.421: 98.4918% ( 1) 00:17:57.065 12.421 - 12.516: 98.4996% ( 1) 00:17:57.065 12.516 - 12.610: 98.5075% ( 1) 00:17:57.065 12.610 - 12.705: 98.5309% ( 3) 00:17:57.065 12.895 - 12.990: 98.5387% ( 1) 00:17:57.065 12.990 - 13.084: 98.5622% ( 3) 00:17:57.065 13.084 - 13.179: 98.5700% ( 1) 00:17:57.065 13.369 - 13.464: 98.5778% ( 1) 00:17:57.065 13.653 - 13.748: 98.5856% ( 1) 00:17:57.065 13.938 - 14.033: 98.5934% ( 1) 00:17:57.065 14.033 - 14.127: 98.6012% ( 1) 00:17:57.065 14.222 - 14.317: 98.6090% ( 1) 00:17:57.065 14.317 - 14.412: 98.6169% ( 1) 00:17:57.065 14.507 - 14.601: 98.6247% ( 1) 00:17:57.065 14.791 - 14.886: 98.6325% ( 1) 00:17:57.065 14.886 - 14.981: 98.6403% ( 1) 00:17:57.065 14.981 - 15.076: 98.6481% ( 1) 00:17:57.065 15.170 - 15.265: 98.6559% ( 1) 00:17:57.065 17.161 - 17.256: 98.6637% ( 1) 
00:17:57.065 17.256 - 17.351: 98.6794% ( 2) 00:17:57.065 17.351 - 17.446: 98.7184% ( 5) 00:17:57.065 17.446 - 17.541: 98.7341% ( 2) 00:17:57.065 17.541 - 17.636: 98.7810% ( 6) 00:17:57.065 17.636 - 17.730: 98.8279% ( 6) 00:17:57.065 17.730 - 17.825: 98.8747% ( 6) 00:17:57.065 17.825 - 17.920: 98.9373% ( 8) 00:17:57.065 17.920 - 18.015: 98.9763% ( 5) 00:17:57.065 18.015 - 18.110: 99.0545% ( 10) 00:17:57.065 18.110 - 18.204: 99.1717% ( 15) 00:17:57.065 18.204 - 18.299: 99.2576% ( 11) 00:17:57.065 18.299 - 18.394: 99.3436% ( 11) 00:17:57.065 18.394 - 18.489: 99.4061% ( 8) 00:17:57.065 18.489 - 18.584: 99.4921% ( 11) 00:17:57.065 18.584 - 18.679: 99.5311% ( 5) 00:17:57.065 18.679 - 18.773: 99.5780% ( 6) 00:17:57.065 18.773 - 18.868: 99.6015% ( 3) 00:17:57.065 18.868 - 18.963: 99.6718% ( 9) 00:17:57.065 18.963 - 19.058: 99.7031% ( 4) 00:17:57.065 19.058 - 19.153: 99.7109% ( 1) 00:17:57.065 19.153 - 19.247: 99.7343% ( 3) 00:17:57.065 19.247 - 19.342: 99.7578% ( 3) 00:17:57.065 19.342 - 19.437: 99.7656% ( 1) 00:17:57.065 19.437 - 19.532: 99.7812% ( 2) 00:17:57.065 19.532 - 19.627: 99.7890% ( 1) 00:17:57.065 19.627 - 19.721: 99.8125% ( 3) 00:17:57.065 19.721 - 19.816: 99.8203% ( 1) 00:17:57.065 20.006 - 20.101: 99.8281% ( 1) 00:17:57.065 20.196 - 20.290: 99.8359% ( 1) 00:17:57.065 20.670 - 20.764: 99.8437% ( 1) 00:17:57.065 21.239 - 21.333: 99.8515% ( 1) 00:17:57.065 21.997 - 22.092: 99.8593% ( 1) 00:17:57.065 24.083 - 24.178: 99.8672% ( 1) 00:17:57.065 27.307 - 27.496: 99.8750% ( 1) 00:17:57.065 3980.705 - 4004.978: 99.9844% ( 14) 00:17:57.065 4004.978 - 4029.250: 100.0000% ( 2) 00:17:57.065 00:17:57.065 Complete histogram 00:17:57.065 ================== 00:17:57.065 Range in us Cumulative Count 00:17:57.065 2.074 - 2.086: 0.2891% ( 37) 00:17:57.065 2.086 - 2.098: 5.2200% ( 631) 00:17:57.065 2.098 - 2.110: 33.6329% ( 3636) 00:17:57.065 2.110 - 2.121: 40.7049% ( 905) 00:17:57.065 2.121 - 2.133: 43.3774% ( 342) 00:17:57.065 2.133 - 2.145: 48.3238% ( 633) 00:17:57.065 2.145 
- 2.157: 50.1680% ( 236) 00:17:57.065 2.157 - 2.169: 57.0290% ( 878) 00:17:57.065 2.169 - 2.181: 70.8994% ( 1775) 00:17:57.065 2.181 - 2.193: 73.7673% ( 367) 00:17:57.065 2.193 - 2.204: 75.7912% ( 259) 00:17:57.065 2.204 - 2.216: 78.3465% ( 327) 00:17:57.065 2.216 - 2.228: 79.6984% ( 173) 00:17:57.065 2.228 - 2.240: 83.4336% ( 478) 00:17:57.065 2.240 - 2.252: 89.3178% ( 753) 00:17:57.065 2.252 - 2.264: 90.5759% ( 161) 00:17:57.065 2.264 - 2.276: 91.2167% ( 82) 00:17:57.065 2.276 - 2.287: 92.3341% ( 143) 00:17:57.065 2.287 - 2.299: 92.9671% ( 81) 00:17:57.065 2.299 - 2.311: 94.0455% ( 138) 00:17:57.065 2.311 - 2.323: 95.1786% ( 145) 00:17:57.065 2.323 - 2.335: 95.3661% ( 24) 00:17:57.065 2.335 - 2.347: 95.4286% ( 8) 00:17:57.066 2.347 - 2.359: 95.5615% ( 17) 00:17:57.066 2.359 - 2.370: 95.7803% ( 28) 00:17:57.066 2.370 - 2.382: 96.0225% ( 31) 00:17:57.066 2.382 - 2.394: 96.4210% ( 51) 00:17:57.066 2.394 - 2.406: 96.7258% ( 39) 00:17:57.066 2.406 - 2.418: 96.9290% ( 26) 00:17:57.066 2.418 - 2.430: 97.1243% ( 25) 00:17:57.066 2.430 - 2.441: 97.3041% ( 23) 00:17:57.066 2.441 - 2.453: 97.4525% ( 19) 00:17:57.066 2.453 - 2.465: 97.6323% ( 23) 00:17:57.066 2.465 - 2.477: 97.7417% ( 14) 00:17:57.066 2.477 - 2.489: 97.8745% ( 17) 00:17:57.066 2.489 - 2.501: 97.9448% ( 9) 00:17:57.066 2.501 - 2.513: 98.0152% ( 9) 00:17:57.066 2.513 - 2.524: 98.0620% ( 6) 00:17:57.066 2.524 - 2.536: 98.1402% ( 10) 00:17:57.066 2.536 - 2.548: 98.1558% ( 2) 00:17:57.066 2.548 - 2.560: 98.1871% ( 4) 00:17:57.066 2.560 - 2.572: 98.2105% ( 3) 00:17:57.066 2.572 - 2.584: 98.2261% ( 2) 00:17:57.066 2.584 - 2.596: 98.2340% ( 1) 00:17:57.066 2.596 - 2.607: 98.2496% ( 2) 00:17:57.066 2.607 - 2.619: 98.2574% ( 1) 00:17:57.066 2.619 - 2.631: 98.2652% ( 1) 00:17:57.066 2.631 - 2.643: 98.2730% ( 1) 00:17:57.066 2.643 - 2.655: 98.2887% ( 2) 00:17:57.066 2.667 - 2.679: 98.3043% ( 2) 00:17:57.066 2.679 - 2.690: 98.3121% ( 1) 00:17:57.066 2.690 - 2.702: 98.3199% ( 1) 00:17:57.066 2.714 - 2.726: 98.3277% ( 1) 
00:17:57.066 2.726 - 2.738: 98.3355% ( 1) 00:17:57.066 2.785 - 2.797: 98.3434% ( 1) [2024-11-18 00:23:20.767975] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:57.066 2.844 - 2.856: 98.3512% ( 1) 00:17:57.066 2.856 - 2.868: 98.3590% ( 1) 00:17:57.066 2.868 - 2.880: 98.3668% ( 1) 00:17:57.066 2.880 - 2.892: 98.3746% ( 1) 00:17:57.066 3.081 - 3.105: 98.3824% ( 1) 00:17:57.066 3.271 - 3.295: 98.3902% ( 1) 00:17:57.066 3.437 - 3.461: 98.4059% ( 2) 00:17:57.066 3.461 - 3.484: 98.4215% ( 2) 00:17:57.066 3.484 - 3.508: 98.4293% ( 1) 00:17:57.066 3.532 - 3.556: 98.4371% ( 1) 00:17:57.066 3.579 - 3.603: 98.4528% ( 2) 00:17:57.066 3.721 - 3.745: 98.4684% ( 2) 00:17:57.066 3.745 - 3.769: 98.4840% ( 2) 00:17:57.066 3.887 - 3.911: 98.4918% ( 1) 00:17:57.066 3.911 - 3.935: 98.4996% ( 1) 00:17:57.066 3.959 - 3.982: 98.5153% ( 2) 00:17:57.066 4.006 - 4.030: 98.5231% ( 1) 00:17:57.066 4.030 - 4.053: 98.5309% ( 1) 00:17:57.066 4.148 - 4.172: 98.5387% ( 1) 00:17:57.066 4.196 - 4.219: 98.5465% ( 1) 00:17:57.066 4.646 - 4.670: 98.5543% ( 1) 00:17:57.066 5.357 - 5.381: 98.5622% ( 1) 00:17:57.066 5.618 - 5.641: 98.5700% ( 1) 00:17:57.066 5.641 - 5.665: 98.5778% ( 1) 00:17:57.066 5.665 - 5.689: 98.5934% ( 2) 00:17:57.066 6.044 - 6.068: 98.6012% ( 1) 00:17:57.066 6.068 - 6.116: 98.6090% ( 1) 00:17:57.066 6.258 - 6.305: 98.6247% ( 2) 00:17:57.066 6.305 - 6.353: 98.6325% ( 1) 00:17:57.066 6.353 - 6.400: 98.6403% ( 1) 00:17:57.066 6.447 - 6.495: 98.6481% ( 1) 00:17:57.066 6.542 - 6.590: 98.6559% ( 1) 00:17:57.066 6.684 - 6.732: 98.6637% ( 1) 00:17:57.066 7.064 - 7.111: 98.6794% ( 2) 00:17:57.066 7.159 - 7.206: 98.6872% ( 1) 00:17:57.066 7.538 - 7.585: 98.6950% ( 1) 00:17:57.066 7.870 - 7.917: 98.7028% ( 1) 00:17:57.066 15.739 - 15.834: 98.7184% ( 2) 00:17:57.066 15.834 - 15.929: 98.7263% ( 1) 00:17:57.066 15.929 - 16.024: 98.7653% ( 5) 00:17:57.066 16.024 - 16.119: 98.7966% ( 4) 00:17:57.066 16.119 - 16.213: 98.8200% ( 
3) 00:17:57.066 16.213 - 16.308: 98.8279% ( 1) 00:17:57.066 16.308 - 16.403: 98.8591% ( 4) 00:17:57.066 16.403 - 16.498: 98.8904% ( 4) 00:17:57.066 16.498 - 16.593: 98.9451% ( 7) 00:17:57.066 16.593 - 16.687: 99.0310% ( 11) 00:17:57.066 16.687 - 16.782: 99.0935% ( 8) 00:17:57.066 16.782 - 16.877: 99.1482% ( 7) 00:17:57.066 16.877 - 16.972: 99.1951% ( 6) 00:17:57.066 16.972 - 17.067: 99.2029% ( 1) 00:17:57.066 17.067 - 17.161: 99.2186% ( 2) 00:17:57.066 17.161 - 17.256: 99.2498% ( 4) 00:17:57.066 17.256 - 17.351: 99.2576% ( 1) 00:17:57.066 17.446 - 17.541: 99.2733% ( 2) 00:17:57.066 17.541 - 17.636: 99.2889% ( 2) 00:17:57.066 17.730 - 17.825: 99.2967% ( 1) 00:17:57.066 17.920 - 18.015: 99.3045% ( 1) 00:17:57.066 18.489 - 18.584: 99.3123% ( 1) 00:17:57.066 18.679 - 18.773: 99.3202% ( 1) 00:17:57.066 3422.436 - 3446.708: 99.3280% ( 1) 00:17:57.066 3932.160 - 3956.433: 99.3358% ( 1) 00:17:57.066 3980.705 - 4004.978: 99.8593% ( 67) 00:17:57.066 4004.978 - 4029.250: 99.9922% ( 17) 00:17:57.066 4975.881 - 5000.154: 100.0000% ( 1) 00:17:57.066 00:17:57.066 00:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:57.066 00:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:57.066 00:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:57.066 00:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:57.066 00:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:57.326 [ 00:17:57.326 { 00:17:57.326 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:57.326 "subtype": "Discovery", 00:17:57.326 "listen_addresses": [], 
00:17:57.326 "allow_any_host": true, 00:17:57.326 "hosts": [] 00:17:57.326 }, 00:17:57.326 { 00:17:57.326 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:57.326 "subtype": "NVMe", 00:17:57.326 "listen_addresses": [ 00:17:57.326 { 00:17:57.326 "trtype": "VFIOUSER", 00:17:57.326 "adrfam": "IPv4", 00:17:57.326 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:57.326 "trsvcid": "0" 00:17:57.326 } 00:17:57.326 ], 00:17:57.326 "allow_any_host": true, 00:17:57.326 "hosts": [], 00:17:57.326 "serial_number": "SPDK1", 00:17:57.326 "model_number": "SPDK bdev Controller", 00:17:57.326 "max_namespaces": 32, 00:17:57.326 "min_cntlid": 1, 00:17:57.326 "max_cntlid": 65519, 00:17:57.326 "namespaces": [ 00:17:57.326 { 00:17:57.326 "nsid": 1, 00:17:57.326 "bdev_name": "Malloc1", 00:17:57.326 "name": "Malloc1", 00:17:57.326 "nguid": "6E81247102DA4856B2BF53D67B77D7EC", 00:17:57.326 "uuid": "6e812471-02da-4856-b2bf-53d67b77d7ec" 00:17:57.326 } 00:17:57.326 ] 00:17:57.326 }, 00:17:57.326 { 00:17:57.326 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:57.326 "subtype": "NVMe", 00:17:57.326 "listen_addresses": [ 00:17:57.326 { 00:17:57.326 "trtype": "VFIOUSER", 00:17:57.326 "adrfam": "IPv4", 00:17:57.326 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:57.326 "trsvcid": "0" 00:17:57.326 } 00:17:57.326 ], 00:17:57.326 "allow_any_host": true, 00:17:57.326 "hosts": [], 00:17:57.326 "serial_number": "SPDK2", 00:17:57.326 "model_number": "SPDK bdev Controller", 00:17:57.326 "max_namespaces": 32, 00:17:57.326 "min_cntlid": 1, 00:17:57.326 "max_cntlid": 65519, 00:17:57.326 "namespaces": [ 00:17:57.326 { 00:17:57.326 "nsid": 1, 00:17:57.326 "bdev_name": "Malloc2", 00:17:57.326 "name": "Malloc2", 00:17:57.326 "nguid": "49285C2BB76848BF913AD9A790D6ED80", 00:17:57.326 "uuid": "49285c2b-b768-48bf-913a-d9a790d6ed80" 00:17:57.326 } 00:17:57.326 ] 00:17:57.326 } 00:17:57.326 ] 00:17:57.326 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # 
AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:57.326 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=231313 00:17:57.326 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:57.326 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:57.326 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:17:57.326 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:57.326 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:17:57.326 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:17:57.326 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:57.585 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:57.585 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:17:57.585 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:17:57.585 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:57.585 [2024-11-18 00:23:21.258806] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:57.585 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:57.585 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:57.585 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:57.585 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:57.585 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:57.843 Malloc3 00:17:57.843 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:58.102 [2024-11-18 00:23:21.862198] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:58.102 00:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:58.102 Asynchronous Event Request test 00:17:58.102 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:58.102 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:58.102 Registering asynchronous event callbacks... 00:17:58.102 Starting namespace attribute notice tests for all controllers... 00:17:58.102 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:58.102 aer_cb - Changed Namespace 00:17:58.102 Cleaning up... 
00:17:58.360 [ 00:17:58.360 { 00:17:58.360 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:58.360 "subtype": "Discovery", 00:17:58.360 "listen_addresses": [], 00:17:58.360 "allow_any_host": true, 00:17:58.360 "hosts": [] 00:17:58.360 }, 00:17:58.360 { 00:17:58.360 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:58.360 "subtype": "NVMe", 00:17:58.360 "listen_addresses": [ 00:17:58.360 { 00:17:58.360 "trtype": "VFIOUSER", 00:17:58.361 "adrfam": "IPv4", 00:17:58.361 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:58.361 "trsvcid": "0" 00:17:58.361 } 00:17:58.361 ], 00:17:58.361 "allow_any_host": true, 00:17:58.361 "hosts": [], 00:17:58.361 "serial_number": "SPDK1", 00:17:58.361 "model_number": "SPDK bdev Controller", 00:17:58.361 "max_namespaces": 32, 00:17:58.361 "min_cntlid": 1, 00:17:58.361 "max_cntlid": 65519, 00:17:58.361 "namespaces": [ 00:17:58.361 { 00:17:58.361 "nsid": 1, 00:17:58.361 "bdev_name": "Malloc1", 00:17:58.361 "name": "Malloc1", 00:17:58.361 "nguid": "6E81247102DA4856B2BF53D67B77D7EC", 00:17:58.361 "uuid": "6e812471-02da-4856-b2bf-53d67b77d7ec" 00:17:58.361 }, 00:17:58.361 { 00:17:58.361 "nsid": 2, 00:17:58.361 "bdev_name": "Malloc3", 00:17:58.361 "name": "Malloc3", 00:17:58.361 "nguid": "115EA993EE4E4C4581CB9EC050E31EC4", 00:17:58.361 "uuid": "115ea993-ee4e-4c45-81cb-9ec050e31ec4" 00:17:58.361 } 00:17:58.361 ] 00:17:58.361 }, 00:17:58.361 { 00:17:58.361 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:58.361 "subtype": "NVMe", 00:17:58.361 "listen_addresses": [ 00:17:58.361 { 00:17:58.361 "trtype": "VFIOUSER", 00:17:58.361 "adrfam": "IPv4", 00:17:58.361 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:58.361 "trsvcid": "0" 00:17:58.361 } 00:17:58.361 ], 00:17:58.361 "allow_any_host": true, 00:17:58.361 "hosts": [], 00:17:58.361 "serial_number": "SPDK2", 00:17:58.361 "model_number": "SPDK bdev Controller", 00:17:58.361 "max_namespaces": 32, 00:17:58.361 "min_cntlid": 1, 00:17:58.361 "max_cntlid": 65519, 00:17:58.361 "namespaces": [ 
00:17:58.361 { 00:17:58.361 "nsid": 1, 00:17:58.361 "bdev_name": "Malloc2", 00:17:58.361 "name": "Malloc2", 00:17:58.361 "nguid": "49285C2BB76848BF913AD9A790D6ED80", 00:17:58.361 "uuid": "49285c2b-b768-48bf-913a-d9a790d6ed80" 00:17:58.361 } 00:17:58.361 ] 00:17:58.361 } 00:17:58.361 ] 00:17:58.361 00:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 231313 00:17:58.361 00:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:58.361 00:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:58.361 00:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:58.361 00:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:58.361 [2024-11-18 00:23:22.169164] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:17:58.361 [2024-11-18 00:23:22.169206] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231448 ]
00:17:58.627 [2024-11-18 00:23:22.220370] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2
00:17:58.627 [2024-11-18 00:23:22.227666] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:17:58.627 [2024-11-18 00:23:22.227695] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fba1daf7000
00:17:58.627 [2024-11-18 00:23:22.228658] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:17:58.627 [2024-11-18 00:23:22.229659] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:17:58.627 [2024-11-18 00:23:22.230668] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:17:58.627 [2024-11-18 00:23:22.231671] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:17:58.627 [2024-11-18 00:23:22.232674] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:17:58.627 [2024-11-18 00:23:22.233682] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:17:58.627 [2024-11-18 00:23:22.234689] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:17:58.627 [2024-11-18 00:23:22.235713] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:17:58.627 [2024-11-18 00:23:22.236711] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:17:58.627 [2024-11-18 00:23:22.236732] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fba1c7ef000
00:17:58.627 [2024-11-18 00:23:22.237847] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:17:58.627 [2024-11-18 00:23:22.250932] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully
00:17:58.627 [2024-11-18 00:23:22.250971] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout)
00:17:58.627 [2024-11-18 00:23:22.260111] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:17:58.627 [2024-11-18 00:23:22.260172] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:17:58.627 [2024-11-18 00:23:22.260262] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout)
00:17:58.627 [2024-11-18 00:23:22.260286] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout)
00:17:58.627 [2024-11-18 00:23:22.260320] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout)
00:17:58.627 [2024-11-18 00:23:22.261114] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300
00:17:58.627 [2024-11-18 00:23:22.261134] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout)
00:17:58.627 [2024-11-18 00:23:22.261146] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout)
00:17:58.627 [2024-11-18 00:23:22.262125] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:17:58.627 [2024-11-18 00:23:22.262146] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout)
00:17:58.627 [2024-11-18 00:23:22.262159] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms)
00:17:58.627 [2024-11-18 00:23:22.263131] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0
00:17:58.627 [2024-11-18 00:23:22.263151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:17:58.627 [2024-11-18 00:23:22.264139] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0
00:17:58.627 [2024-11-18 00:23:22.264159] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0
00:17:58.627 [2024-11-18 00:23:22.264167] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms)
00:17:58.627 [2024-11-18 00:23:22.264179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:17:58.627 [2024-11-18 00:23:22.264288] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1
00:17:58.627 [2024-11-18 00:23:22.264296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:17:58.627 [2024-11-18 00:23:22.264305] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000
00:17:58.627 [2024-11-18 00:23:22.265145] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000
00:17:58.627 [2024-11-18 00:23:22.266156] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff
00:17:58.627 [2024-11-18 00:23:22.267159] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:17:58.627 [2024-11-18 00:23:22.268157] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:17:58.627 [2024-11-18 00:23:22.268240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:17:58.627 [2024-11-18 00:23:22.269176] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1
00:17:58.627 [2024-11-18 00:23:22.269196] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:17:58.627 [2024-11-18 00:23:22.269206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms)
00:17:58.627 [2024-11-18 00:23:22.269230] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout)
00:17:58.627 [2024-11-18 00:23:22.269243] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms)
00:17:58.627 [2024-11-18 00:23:22.269267] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:17:58.627 [2024-11-18 00:23:22.269276] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:17:58.627 [2024-11-18 00:23:22.269283] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:58.627 [2024-11-18 00:23:22.269325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:17:58.627 [2024-11-18 00:23:22.281331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:17:58.627 [2024-11-18 00:23:22.281373] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072
00:17:58.627 [2024-11-18 00:23:22.281382] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072
00:17:58.627 [2024-11-18 00:23:22.281390] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001
00:17:58.627 [2024-11-18 00:23:22.281404] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:17:58.627 [2024-11-18 00:23:22.281417] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1
00:17:58.627 [2024-11-18 00:23:22.281427] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1
00:17:58.627 [2024-11-18 00:23:22.281435] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms)
00:17:58.627 [2024-11-18 00:23:22.281452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms)
00:17:58.627 [2024-11-18 00:23:22.281470] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:17:58.627 [2024-11-18 00:23:22.289320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:17:58.627 [2024-11-18 00:23:22.289347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:17:58.627 [2024-11-18 00:23:22.289377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:17:58.627 [2024-11-18 00:23:22.289389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:17:58.627 [2024-11-18 00:23:22.289402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:17:58.627 [2024-11-18 00:23:22.289411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms)
00:17:58.627 [2024-11-18 00:23:22.289424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:17:58.627 [2024-11-18 00:23:22.289438] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:17:58.627 [2024-11-18 00:23:22.297323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:17:58.627 [2024-11-18 00:23:22.297347] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms
00:17:58.627 [2024-11-18 00:23:22.297358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms)
00:17:58.627 [2024-11-18 00:23:22.297370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms)
00:17:58.627 [2024-11-18 00:23:22.297381] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms)
00:17:58.627 [2024-11-18 00:23:22.297395] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:17:58.627 [2024-11-18 00:23:22.305321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:17:58.627 [2024-11-18 00:23:22.305401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms)
00:17:58.627 [2024-11-18 00:23:22.305419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms)
00:17:58.627 [2024-11-18 00:23:22.305433] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:17:58.627 [2024-11-18 00:23:22.305445] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:17:58.627 [2024-11-18 00:23:22.305451] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:58.627 [2024-11-18 00:23:22.305461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:17:58.627 [2024-11-18 00:23:22.313325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:17:58.627 [2024-11-18 00:23:22.313363] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added
00:17:58.627 [2024-11-18 00:23:22.313379] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms)
00:17:58.627 [2024-11-18 00:23:22.313394] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms)
00:17:58.627 [2024-11-18 00:23:22.313407] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:17:58.627 [2024-11-18 00:23:22.313415] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:17:58.627 [2024-11-18 00:23:22.313421] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:58.627 [2024-11-18 00:23:22.313430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:17:58.627 [2024-11-18 00:23:22.321323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
00:17:58.627 [2024-11-18 00:23:22.321352] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms)
00:17:58.627 [2024-11-18 00:23:22.321369] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:17:58.627 [2024-11-18 00:23:22.321382] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:17:58.627 [2024-11-18 00:23:22.321391] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:17:58.628 [2024-11-18 00:23:22.321397] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:58.628 [2024-11-18 00:23:22.321406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:17:58.628 [2024-11-18 00:23:22.329337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:17:58.628 [2024-11-18 00:23:22.329360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms)
00:17:58.628 [2024-11-18 00:23:22.329373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms)
00:17:58.628 [2024-11-18 00:23:22.329388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms)
00:17:58.628 [2024-11-18 00:23:22.329399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms)
00:17:58.628 [2024-11-18 00:23:22.329408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms)
00:17:58.628 [2024-11-18 00:23:22.329417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms)
00:17:58.628 [2024-11-18 00:23:22.329425] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID
00:17:58.628 [2024-11-18 00:23:22.329437] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms)
00:17:58.628 [2024-11-18 00:23:22.329446] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout)
00:17:58.628 [2024-11-18 00:23:22.329472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0
00:17:58.628 [2024-11-18 00:23:22.337322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0
00:17:58.628 [2024-11-18 00:23:22.337348] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:17:58.628 [2024-11-18 00:23:22.345325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0
00:17:58.628 [2024-11-18 00:23:22.345350] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0
00:17:58.628 [2024-11-18 00:23:22.353320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0
00:17:58.628 [2024-11-18 00:23:22.353346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:17:58.628 [2024-11-18 00:23:22.361324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0
00:17:58.628 [2024-11-18 00:23:22.361362] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192
00:17:58.628 [2024-11-18 00:23:22.361373] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000
00:17:58.628 [2024-11-18 00:23:22.361380] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000
00:17:58.628 [2024-11-18 00:23:22.361385] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000
00:17:58.628 [2024-11-18 00:23:22.361391] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2
00:17:58.628 [2024-11-18 00:23:22.361401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000
00:17:58.628 [2024-11-18 00:23:22.361412] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512
00:17:58.628 [2024-11-18 00:23:22.361420] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000
00:17:58.628 [2024-11-18 00:23:22.361426] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:58.628 [2024-11-18 00:23:22.361435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0
00:17:58.628 [2024-11-18 00:23:22.361446] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512
00:17:58.628 [2024-11-18 00:23:22.361454] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:17:58.628 [2024-11-18 00:23:22.361459] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:58.628 [2024-11-18 00:23:22.361468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:17:58.628 [2024-11-18 00:23:22.361480] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096
00:17:58.628 [2024-11-18 00:23:22.361488] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000
00:17:58.628 [2024-11-18 00:23:22.361493] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:17:58.628 [2024-11-18 00:23:22.361502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0
00:17:58.628 [2024-11-18 00:23:22.369323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0
00:17:58.628 [2024-11-18 00:23:22.369351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:17:58.628 [2024-11-18 00:23:22.369368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:17:58.628 [2024-11-18 00:23:22.369380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:17:58.628 =====================================================
00:17:58.628 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:17:58.628 =====================================================
00:17:58.628 Controller Capabilities/Features
00:17:58.628 ================================
00:17:58.628 Vendor ID: 4e58
00:17:58.628 Subsystem Vendor ID: 4e58
00:17:58.628 Serial Number: SPDK2
00:17:58.628 Model Number: SPDK bdev Controller
00:17:58.628 Firmware Version: 25.01
00:17:58.628 Recommended Arb Burst: 6
00:17:58.628 IEEE OUI Identifier: 8d 6b 50
00:17:58.628 Multi-path I/O
00:17:58.628 May have multiple subsystem ports: Yes
00:17:58.628 May have multiple controllers: Yes
00:17:58.628 Associated with SR-IOV VF: No
00:17:58.628 Max Data Transfer Size: 131072
00:17:58.628 Max Number of Namespaces: 32
00:17:58.628 Max Number of I/O Queues: 127
00:17:58.628 NVMe Specification Version (VS): 1.3
00:17:58.628 NVMe Specification Version (Identify): 1.3
00:17:58.628 Maximum Queue Entries: 256
00:17:58.628 Contiguous Queues Required: Yes
00:17:58.628 Arbitration Mechanisms Supported
00:17:58.628 Weighted Round Robin: Not Supported
00:17:58.628 Vendor Specific: Not Supported
00:17:58.628 Reset Timeout: 15000 ms
00:17:58.628 Doorbell Stride: 4 bytes
00:17:58.628 NVM Subsystem Reset: Not Supported
00:17:58.628 Command Sets Supported
00:17:58.628 NVM Command Set: Supported
00:17:58.628 Boot Partition: Not Supported
00:17:58.628 Memory Page Size Minimum: 4096 bytes
00:17:58.628 Memory Page Size Maximum: 4096 bytes
00:17:58.628 Persistent Memory Region: Not Supported
00:17:58.628 Optional Asynchronous Events Supported
00:17:58.628 Namespace Attribute Notices: Supported
00:17:58.628 Firmware Activation Notices: Not Supported
00:17:58.628 ANA Change Notices: Not Supported
00:17:58.628 PLE Aggregate Log Change Notices: Not Supported
00:17:58.628 LBA Status Info Alert Notices: Not Supported
00:17:58.628 EGE Aggregate Log Change Notices: Not Supported
00:17:58.628 Normal NVM Subsystem Shutdown event: Not Supported
00:17:58.628 Zone Descriptor Change Notices: Not Supported
00:17:58.628 Discovery Log Change Notices: Not Supported
00:17:58.628 Controller Attributes
00:17:58.628 128-bit Host Identifier: Supported
00:17:58.628 Non-Operational Permissive Mode: Not Supported
00:17:58.628 NVM Sets: Not Supported
00:17:58.628 Read Recovery Levels: Not Supported
00:17:58.628 Endurance Groups: Not Supported
00:17:58.628 Predictable Latency Mode: Not Supported
00:17:58.628 Traffic Based Keep ALive: Not Supported
00:17:58.628 Namespace Granularity: Not Supported
00:17:58.628 SQ Associations: Not Supported
00:17:58.628 UUID List: Not Supported
00:17:58.628 Multi-Domain Subsystem: Not Supported
00:17:58.628 Fixed Capacity Management: Not Supported
00:17:58.628 Variable Capacity Management: Not Supported
00:17:58.628 Delete Endurance Group: Not Supported
00:17:58.628 Delete NVM Set: Not Supported
00:17:58.628 Extended LBA Formats Supported: Not Supported
00:17:58.628 Flexible Data Placement Supported: Not Supported
00:17:58.628 
00:17:58.628 Controller Memory Buffer Support
00:17:58.628 ================================
00:17:58.628 Supported: No
00:17:58.628 
00:17:58.628 Persistent Memory Region Support
00:17:58.628 ================================
00:17:58.628 Supported: No
00:17:58.628 
00:17:58.628 Admin Command Set Attributes
00:17:58.628 ============================
00:17:58.628 Security Send/Receive: Not Supported
00:17:58.628 Format NVM: Not Supported
00:17:58.628 Firmware Activate/Download: Not Supported
00:17:58.628 Namespace Management: Not Supported
00:17:58.628 Device Self-Test: Not Supported
00:17:58.628 Directives: Not Supported
00:17:58.628 NVMe-MI: Not Supported
00:17:58.628 Virtualization Management: Not Supported
00:17:58.628 Doorbell Buffer Config: Not Supported
00:17:58.628 Get LBA Status Capability: Not Supported
00:17:58.628 Command & Feature Lockdown Capability: Not Supported
00:17:58.628 Abort Command Limit: 4
00:17:58.628 Async Event Request Limit: 4
00:17:58.628 Number of Firmware Slots: N/A
00:17:58.628 Firmware Slot 1 Read-Only: N/A
00:17:58.628 Firmware Activation Without Reset: N/A
00:17:58.628 Multiple Update Detection Support: N/A
00:17:58.628 Firmware Update Granularity: No Information Provided
00:17:58.628 Per-Namespace SMART Log: No
00:17:58.628 Asymmetric Namespace Access Log Page: Not Supported
00:17:58.628 Subsystem NQN: nqn.2019-07.io.spdk:cnode2
00:17:58.628 Command Effects Log Page: Supported
00:17:58.628 Get Log Page Extended Data: Supported
00:17:58.628 Telemetry Log Pages: Not Supported
00:17:58.628 Persistent Event Log Pages: Not Supported
00:17:58.628 Supported Log Pages Log Page: May Support
00:17:58.628 Commands Supported & Effects Log Page: Not Supported
00:17:58.628 Feature Identifiers & Effects Log Page:May Support
00:17:58.628 NVMe-MI Commands & Effects Log Page: May Support
00:17:58.628 Data Area 4 for Telemetry Log: Not Supported
00:17:58.628 Error Log Page Entries Supported: 128
00:17:58.628 Keep Alive: Supported
00:17:58.628 Keep Alive Granularity: 10000 ms
00:17:58.628 
00:17:58.628 NVM Command Set Attributes
00:17:58.628 ==========================
00:17:58.628 Submission Queue Entry Size
00:17:58.628 Max: 64
00:17:58.628 Min: 64
00:17:58.628 Completion Queue Entry Size
00:17:58.628 Max: 16
00:17:58.628 Min: 16
00:17:58.628 Number of Namespaces: 32
00:17:58.628 Compare Command: Supported
00:17:58.628 Write Uncorrectable Command: Not Supported
00:17:58.628 Dataset Management Command: Supported
00:17:58.628 Write Zeroes Command: Supported
00:17:58.628 Set Features Save Field: Not Supported
00:17:58.628 Reservations: Not Supported
00:17:58.628 Timestamp: Not Supported
00:17:58.628 Copy: Supported
00:17:58.628 Volatile Write Cache: Present
00:17:58.628 Atomic Write Unit (Normal): 1
00:17:58.628 Atomic Write Unit (PFail): 1
00:17:58.628 Atomic Compare & Write Unit: 1
00:17:58.628 Fused Compare & Write: Supported
00:17:58.628 Scatter-Gather List
00:17:58.628 SGL Command Set: Supported (Dword aligned)
00:17:58.628 SGL Keyed: Not Supported
00:17:58.628 SGL Bit Bucket Descriptor: Not Supported
00:17:58.628 SGL Metadata Pointer: Not Supported
00:17:58.628 Oversized SGL: Not Supported
00:17:58.628 SGL Metadata Address: Not Supported
00:17:58.628 SGL Offset: Not Supported
00:17:58.628 Transport SGL Data Block: Not Supported
00:17:58.628 Replay Protected Memory Block: Not Supported
00:17:58.628 
00:17:58.628 Firmware Slot Information
00:17:58.628 =========================
00:17:58.628 Active slot: 1
00:17:58.628 Slot 1 Firmware Revision: 25.01
00:17:58.628 
00:17:58.628 
00:17:58.628 Commands Supported and Effects
00:17:58.628 ==============================
00:17:58.628 Admin Commands
00:17:58.628 --------------
00:17:58.628 Get Log Page (02h): Supported
00:17:58.628 Identify (06h): Supported
00:17:58.628 Abort (08h): Supported
00:17:58.628 Set Features (09h): Supported
00:17:58.628 Get Features (0Ah): Supported
00:17:58.628 Asynchronous Event Request (0Ch): Supported
00:17:58.628 Keep Alive (18h): Supported
00:17:58.628 I/O Commands
00:17:58.628 ------------
00:17:58.628 Flush (00h): Supported LBA-Change
00:17:58.628 Write (01h): Supported LBA-Change
00:17:58.628 Read (02h): Supported
00:17:58.628 Compare (05h): Supported
00:17:58.628 Write Zeroes (08h): Supported LBA-Change
00:17:58.628 Dataset Management (09h): Supported LBA-Change
00:17:58.628 Copy (19h): Supported LBA-Change
00:17:58.628 
00:17:58.628 Error Log
00:17:58.628 =========
00:17:58.628 
00:17:58.628 Arbitration
00:17:58.628 ===========
00:17:58.628 Arbitration Burst: 1
00:17:58.629 
00:17:58.629 Power Management
00:17:58.629 ================
00:17:58.629 Number of Power States: 1
00:17:58.629 Current Power State: Power State #0
00:17:58.629 Power State #0:
00:17:58.629 Max Power: 0.00 W
00:17:58.629 Non-Operational State: Operational
00:17:58.629 Entry Latency: Not Reported
00:17:58.629 Exit Latency: Not Reported
00:17:58.629 Relative Read Throughput: 0
00:17:58.629 Relative Read Latency: 0
00:17:58.629 Relative Write Throughput: 0
00:17:58.629 Relative Write Latency: 0
00:17:58.629 Idle Power: Not Reported
00:17:58.629 Active Power: Not Reported
00:17:58.629 Non-Operational Permissive Mode: Not Supported
00:17:58.629 
00:17:58.629 Health Information
00:17:58.629 ==================
00:17:58.629 Critical Warnings:
00:17:58.629 Available Spare Space: OK
00:17:58.629 Temperature: OK
00:17:58.629 Device Reliability: OK
00:17:58.629 Read Only: No
00:17:58.629 Volatile Memory Backup: OK
00:17:58.629 Current Temperature: 0 Kelvin (-273 Celsius)
00:17:58.629 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:17:58.629 Available Spare: 0%
00:17:58.629 Available Sp[2024-11-18 00:23:22.369509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:17:58.629 [2024-11-18 00:23:22.377320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:17:58.629 [2024-11-18 00:23:22.377384] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD
00:17:58.629 [2024-11-18 00:23:22.377403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:58.629 [2024-11-18 00:23:22.377414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:58.629 [2024-11-18 00:23:22.377424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:58.629 [2024-11-18 00:23:22.377433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:58.629 [2024-11-18 00:23:22.377519] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:17:58.629 [2024-11-18 00:23:22.377541] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:17:58.629 [2024-11-18 00:23:22.378524] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:17:58.629 [2024-11-18 00:23:22.378597] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us
00:17:58.629 [2024-11-18 00:23:22.378628] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms
00:17:58.629 [2024-11-18 00:23:22.379531] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:17:58.629 [2024-11-18 00:23:22.379554] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds
00:17:58.629 [2024-11-18 00:23:22.379621] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:17:58.629 [2024-11-18 00:23:22.382322] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:17:58.629 are Threshold: 0%
00:17:58.629 Life Percentage Used: 0%
00:17:58.629 Data Units Read: 0
00:17:58.629 Data Units Written: 0
00:17:58.629 Host Read Commands: 0
00:17:58.629 Host Write Commands: 0
00:17:58.629 Controller Busy Time: 0 minutes
00:17:58.629 Power Cycles: 0
00:17:58.629 Power On Hours: 0 hours
00:17:58.629 Unsafe Shutdowns: 0
00:17:58.629 Unrecoverable Media Errors: 0
00:17:58.629 Lifetime Error Log Entries: 0
00:17:58.629 Warning Temperature Time: 0 minutes
00:17:58.629 Critical Temperature Time: 0 minutes
00:17:58.629 
00:17:58.629 Number of Queues
00:17:58.629 ================
00:17:58.629 Number of I/O Submission Queues: 127
00:17:58.629 Number of I/O Completion Queues: 127
00:17:58.629 
00:17:58.629 Active Namespaces
00:17:58.629 =================
00:17:58.629 Namespace ID:1
00:17:58.629 Error Recovery Timeout: Unlimited
00:17:58.629 Command Set Identifier: NVM (00h)
00:17:58.629 Deallocate: Supported
00:17:58.629 Deallocated/Unwritten Error: Not Supported
00:17:58.629 Deallocated Read Value: Unknown
00:17:58.629 Deallocate in Write Zeroes: Not Supported
00:17:58.629 Deallocated Guard Field: 0xFFFF
00:17:58.629 Flush: Supported
00:17:58.629 Reservation: Supported
00:17:58.629 Namespace Sharing Capabilities: Multiple Controllers
00:17:58.629 Size (in LBAs): 131072 (0GiB)
00:17:58.629 Capacity (in LBAs): 131072 (0GiB)
00:17:58.629 Utilization (in LBAs): 131072 (0GiB)
00:17:58.629 NGUID: 49285C2BB76848BF913AD9A790D6ED80
00:17:58.629 UUID: 49285c2b-b768-48bf-913a-d9a790d6ed80
00:17:58.629 Thin Provisioning: Not Supported
00:17:58.629 Per-NS Atomic Units: Yes
00:17:58.629 Atomic Boundary Size (Normal): 0
00:17:58.629 Atomic Boundary Size (PFail): 0
00:17:58.629 Atomic Boundary Offset: 0
00:17:58.629 Maximum Single Source Range Length: 65535
00:17:58.629 Maximum Copy Length: 65535
00:17:58.629 Maximum Source Range Count: 1
00:17:58.629 NGUID/EUI64 Never Reused: No
00:17:58.629 Namespace Write Protected: No
00:17:58.629 Number of LBA Formats: 1
00:17:58.629 Current LBA Format: LBA Format #00
00:17:58.629 LBA Format #00: Data Size: 512 Metadata Size: 0
00:17:58.629 
00:17:58.629 00:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:17:58.887 [2024-11-18 00:23:22.620113] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:18:04.150 Initializing NVMe Controllers
00:18:04.150 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:18:04.150 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:18:04.150 Initialization complete. Launching workers.
00:18:04.150 ========================================================
00:18:04.151 Latency(us)
00:18:04.151 Device Information : IOPS MiB/s Average min max
00:18:04.151 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34912.28 136.38 3665.63 1164.25 10258.98
00:18:04.151 ========================================================
00:18:04.151 Total : 34912.28 136.38 3665.63 1164.25 10258.98
00:18:04.151 
00:18:04.151 [2024-11-18 00:23:27.728665] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:18:04.151 00:23:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:18:04.410 [2024-11-18 00:23:27.984442] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:18:09.671 Initializing NVMe Controllers
00:18:09.671 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:18:09.671 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:18:09.671 Initialization complete. Launching workers.
00:18:09.672 ========================================================
00:18:09.672 Latency(us)
00:18:09.672 Device Information : IOPS MiB/s Average min max
00:18:09.672 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31985.19 124.94 4003.08 1214.08 8791.47
00:18:09.672 ========================================================
00:18:09.672 Total : 31985.19 124.94 4003.08 1214.08 8791.47
00:18:09.672 
00:18:09.672 [2024-11-18 00:23:33.007017] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:18:09.672 00:23:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:18:09.672 [2024-11-18 00:23:33.237997] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:18:14.946 [2024-11-18 00:23:38.371455] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:18:14.946 Initializing NVMe Controllers
00:18:14.946 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:18:14.946 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:18:14.946 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1
00:18:14.946 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2
00:18:14.946 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3
00:18:14.946 Initialization complete. Launching workers.
00:18:14.946 Starting thread on core 2 00:18:14.946 Starting thread on core 3 00:18:14.946 Starting thread on core 1 00:18:14.946 00:23:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:14.946 [2024-11-18 00:23:38.684761] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:18.252 [2024-11-18 00:23:41.774841] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:18.253 Initializing NVMe Controllers 00:18:18.253 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:18.253 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:18.253 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:18.253 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:18.253 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:18.253 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:18.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:18.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:18.253 Initialization complete. Launching workers. 
00:18:18.253 Starting thread on core 1 with urgent priority queue 00:18:18.253 Starting thread on core 2 with urgent priority queue 00:18:18.253 Starting thread on core 3 with urgent priority queue 00:18:18.253 Starting thread on core 0 with urgent priority queue 00:18:18.253 SPDK bdev Controller (SPDK2 ) core 0: 4949.33 IO/s 20.20 secs/100000 ios 00:18:18.253 SPDK bdev Controller (SPDK2 ) core 1: 4830.67 IO/s 20.70 secs/100000 ios 00:18:18.253 SPDK bdev Controller (SPDK2 ) core 2: 4965.00 IO/s 20.14 secs/100000 ios 00:18:18.253 SPDK bdev Controller (SPDK2 ) core 3: 5044.67 IO/s 19.82 secs/100000 ios 00:18:18.253 ======================================================== 00:18:18.253 00:18:18.253 00:23:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:18.511 [2024-11-18 00:23:42.075308] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:18.511 Initializing NVMe Controllers 00:18:18.511 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:18.511 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:18.511 Namespace ID: 1 size: 0GB 00:18:18.511 Initialization complete. 00:18:18.511 INFO: using host memory buffer for IO 00:18:18.511 Hello world! 
00:18:18.511 [2024-11-18 00:23:42.089564] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:18.511 00:23:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:18.769 [2024-11-18 00:23:42.392703] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:19.704 Initializing NVMe Controllers 00:18:19.704 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:19.704 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:19.704 Initialization complete. Launching workers. 00:18:19.704 submit (in ns) avg, min, max = 8539.1, 3508.9, 4017293.3 00:18:19.704 complete (in ns) avg, min, max = 25792.2, 2067.8, 6008956.7 00:18:19.704 00:18:19.704 Submit histogram 00:18:19.704 ================ 00:18:19.704 Range in us Cumulative Count 00:18:19.704 3.508 - 3.532: 0.3597% ( 47) 00:18:19.704 3.532 - 3.556: 0.8342% ( 62) 00:18:19.704 3.556 - 3.579: 2.9004% ( 270) 00:18:19.704 3.579 - 3.603: 7.0100% ( 537) 00:18:19.704 3.603 - 3.627: 13.0405% ( 788) 00:18:19.704 3.627 - 3.650: 20.5862% ( 986) 00:18:19.704 3.650 - 3.674: 28.2620% ( 1003) 00:18:19.704 3.674 - 3.698: 35.8843% ( 996) 00:18:19.704 3.698 - 3.721: 44.3101% ( 1101) 00:18:19.704 3.721 - 3.745: 51.6186% ( 955) 00:18:19.704 3.745 - 3.769: 57.5036% ( 769) 00:18:19.704 3.769 - 3.793: 62.4474% ( 646) 00:18:19.704 3.793 - 3.816: 66.2279% ( 494) 00:18:19.704 3.816 - 3.840: 70.0237% ( 496) 00:18:19.704 3.840 - 3.864: 73.8425% ( 499) 00:18:19.704 3.864 - 3.887: 77.4547% ( 472) 00:18:19.704 3.887 - 3.911: 80.6536% ( 418) 00:18:19.704 3.911 - 3.935: 83.6917% ( 397) 00:18:19.704 3.935 - 3.959: 86.3932% ( 353) 00:18:19.704 3.959 - 3.982: 88.7732% ( 311) 00:18:19.704 3.982 - 4.006: 90.6941% ( 251) 
00:18:19.704 4.006 - 4.030: 92.0716% ( 180) 00:18:19.704 4.030 - 4.053: 93.3956% ( 173) 00:18:19.704 4.053 - 4.077: 94.4440% ( 137) 00:18:19.704 4.077 - 4.101: 95.0869% ( 84) 00:18:19.704 4.101 - 4.124: 95.7144% ( 82) 00:18:19.704 4.124 - 4.148: 96.0358% ( 42) 00:18:19.704 4.148 - 4.172: 96.3113% ( 36) 00:18:19.704 4.172 - 4.196: 96.5639% ( 33) 00:18:19.704 4.196 - 4.219: 96.7322% ( 22) 00:18:19.704 4.219 - 4.243: 96.8547% ( 16) 00:18:19.704 4.243 - 4.267: 96.9771% ( 16) 00:18:19.704 4.267 - 4.290: 97.0460% ( 9) 00:18:19.704 4.290 - 4.314: 97.1149% ( 9) 00:18:19.704 4.314 - 4.338: 97.1914% ( 10) 00:18:19.704 4.338 - 4.361: 97.2603% ( 9) 00:18:19.704 4.361 - 4.385: 97.2985% ( 5) 00:18:19.704 4.385 - 4.409: 97.3215% ( 3) 00:18:19.704 4.409 - 4.433: 97.3368% ( 2) 00:18:19.704 4.456 - 4.480: 97.3751% ( 5) 00:18:19.704 4.480 - 4.504: 97.4057% ( 4) 00:18:19.704 4.504 - 4.527: 97.4210% ( 2) 00:18:19.704 4.551 - 4.575: 97.4286% ( 1) 00:18:19.704 4.599 - 4.622: 97.4363% ( 1) 00:18:19.704 4.622 - 4.646: 97.4516% ( 2) 00:18:19.704 4.646 - 4.670: 97.4975% ( 6) 00:18:19.704 4.670 - 4.693: 97.5281% ( 4) 00:18:19.704 4.717 - 4.741: 97.5740% ( 6) 00:18:19.704 4.741 - 4.764: 97.6047% ( 4) 00:18:19.704 4.764 - 4.788: 97.6123% ( 1) 00:18:19.704 4.788 - 4.812: 97.6200% ( 1) 00:18:19.704 4.812 - 4.836: 97.6659% ( 6) 00:18:19.704 4.836 - 4.859: 97.6965% ( 4) 00:18:19.704 4.859 - 4.883: 97.7271% ( 4) 00:18:19.704 4.883 - 4.907: 97.7654% ( 5) 00:18:19.704 4.907 - 4.930: 97.8036% ( 5) 00:18:19.704 4.930 - 4.954: 97.8342% ( 4) 00:18:19.704 4.954 - 4.978: 97.8649% ( 4) 00:18:19.704 4.978 - 5.001: 97.8878% ( 3) 00:18:19.704 5.001 - 5.025: 97.9184% ( 4) 00:18:19.704 5.025 - 5.049: 97.9490% ( 4) 00:18:19.704 5.049 - 5.073: 97.9720% ( 3) 00:18:19.704 5.073 - 5.096: 97.9873% ( 2) 00:18:19.704 5.096 - 5.120: 98.0256% ( 5) 00:18:19.704 5.120 - 5.144: 98.0638% ( 5) 00:18:19.704 5.144 - 5.167: 98.1097% ( 6) 00:18:19.704 5.167 - 5.191: 98.1404% ( 4) 00:18:19.704 5.191 - 5.215: 98.1633% ( 3) 
00:18:19.704 5.215 - 5.239: 98.1710% ( 1) 00:18:19.705 5.239 - 5.262: 98.2016% ( 4) 00:18:19.705 5.286 - 5.310: 98.2245% ( 3) 00:18:19.705 5.357 - 5.381: 98.2322% ( 1) 00:18:19.705 5.381 - 5.404: 98.2398% ( 1) 00:18:19.705 5.404 - 5.428: 98.2475% ( 1) 00:18:19.705 5.523 - 5.547: 98.2551% ( 1) 00:18:19.705 5.547 - 5.570: 98.2628% ( 1) 00:18:19.705 5.713 - 5.736: 98.2705% ( 1) 00:18:19.705 5.807 - 5.831: 98.2858% ( 2) 00:18:19.705 5.855 - 5.879: 98.3011% ( 2) 00:18:19.705 5.879 - 5.902: 98.3087% ( 1) 00:18:19.705 5.902 - 5.926: 98.3240% ( 2) 00:18:19.705 5.926 - 5.950: 98.3317% ( 1) 00:18:19.705 5.973 - 5.997: 98.3393% ( 1) 00:18:19.705 5.997 - 6.021: 98.3470% ( 1) 00:18:19.705 6.021 - 6.044: 98.3546% ( 1) 00:18:19.705 6.068 - 6.116: 98.3623% ( 1) 00:18:19.705 6.116 - 6.163: 98.3699% ( 1) 00:18:19.705 6.305 - 6.353: 98.3776% ( 1) 00:18:19.705 6.353 - 6.400: 98.3852% ( 1) 00:18:19.705 6.447 - 6.495: 98.3929% ( 1) 00:18:19.705 6.495 - 6.542: 98.4006% ( 1) 00:18:19.705 6.542 - 6.590: 98.4082% ( 1) 00:18:19.705 6.684 - 6.732: 98.4159% ( 1) 00:18:19.705 6.732 - 6.779: 98.4312% ( 2) 00:18:19.705 6.921 - 6.969: 98.4388% ( 1) 00:18:19.705 7.016 - 7.064: 98.4465% ( 1) 00:18:19.705 7.064 - 7.111: 98.4541% ( 1) 00:18:19.705 7.111 - 7.159: 98.4694% ( 2) 00:18:19.705 7.159 - 7.206: 98.4771% ( 1) 00:18:19.705 7.206 - 7.253: 98.4847% ( 1) 00:18:19.705 7.253 - 7.301: 98.5000% ( 2) 00:18:19.705 7.348 - 7.396: 98.5230% ( 3) 00:18:19.705 7.396 - 7.443: 98.5306% ( 1) 00:18:19.705 7.443 - 7.490: 98.5383% ( 1) 00:18:19.705 7.585 - 7.633: 98.5536% ( 2) 00:18:19.705 7.680 - 7.727: 98.5689% ( 2) 00:18:19.705 7.727 - 7.775: 98.5766% ( 1) 00:18:19.705 7.917 - 7.964: 98.5995% ( 3) 00:18:19.705 7.964 - 8.012: 98.6072% ( 1) 00:18:19.705 8.059 - 8.107: 98.6301% ( 3) 00:18:19.705 8.107 - 8.154: 98.6378% ( 1) 00:18:19.705 8.249 - 8.296: 98.6454% ( 1) 00:18:19.705 8.296 - 8.344: 98.6607% ( 2) 00:18:19.705 8.344 - 8.391: 98.6684% ( 1) 00:18:19.705 8.391 - 8.439: 98.6761% ( 1) 00:18:19.705 8.439 - 
8.486: 98.6837% ( 1) 00:18:19.705 8.628 - 8.676: 98.6914% ( 1) 00:18:19.705 8.723 - 8.770: 98.6990% ( 1) 00:18:19.705 9.007 - 9.055: 98.7067% ( 1) 00:18:19.705 9.102 - 9.150: 98.7143% ( 1) 00:18:19.705 9.244 - 9.292: 98.7220% ( 1) 00:18:19.705 9.387 - 9.434: 98.7296% ( 1) 00:18:19.705 9.576 - 9.624: 98.7449% ( 2) 00:18:19.705 9.719 - 9.766: 98.7526% ( 1) 00:18:19.705 10.003 - 10.050: 98.7602% ( 1) 00:18:19.705 10.145 - 10.193: 98.7679% ( 1) 00:18:19.705 10.193 - 10.240: 98.7832% ( 2) 00:18:19.705 10.287 - 10.335: 98.7908% ( 1) 00:18:19.705 10.430 - 10.477: 98.7985% ( 1) 00:18:19.705 10.524 - 10.572: 98.8062% ( 1) 00:18:19.705 11.093 - 11.141: 98.8138% ( 1) 00:18:19.705 11.425 - 11.473: 98.8215% ( 1) 00:18:19.705 11.473 - 11.520: 98.8291% ( 1) 00:18:19.705 11.615 - 11.662: 98.8444% ( 2) 00:18:19.705 11.994 - 12.041: 98.8521% ( 1) 00:18:19.705 12.231 - 12.326: 98.8674% ( 2) 00:18:19.705 12.516 - 12.610: 98.8750% ( 1) 00:18:19.705 12.610 - 12.705: 98.8827% ( 1) 00:18:19.705 12.800 - 12.895: 98.8980% ( 2) 00:18:19.705 13.179 - 13.274: 98.9133% ( 2) 00:18:19.705 13.274 - 13.369: 98.9516% ( 5) 00:18:19.705 13.369 - 13.464: 98.9669% ( 2) 00:18:19.705 13.559 - 13.653: 98.9745% ( 1) 00:18:19.705 13.653 - 13.748: 98.9822% ( 1) 00:18:19.705 13.843 - 13.938: 98.9898% ( 1) 00:18:19.705 14.033 - 14.127: 99.0051% ( 2) 00:18:19.705 14.127 - 14.222: 99.0204% ( 2) 00:18:19.705 14.222 - 14.317: 99.0357% ( 2) 00:18:19.705 14.412 - 14.507: 99.0434% ( 1) 00:18:19.705 14.601 - 14.696: 99.0510% ( 1) 00:18:19.705 14.696 - 14.791: 99.0664% ( 2) 00:18:19.705 14.791 - 14.886: 99.0740% ( 1) 00:18:19.705 17.161 - 17.256: 99.0970% ( 3) 00:18:19.705 17.351 - 17.446: 99.1352% ( 5) 00:18:19.705 17.446 - 17.541: 99.1811% ( 6) 00:18:19.705 17.541 - 17.636: 99.2194% ( 5) 00:18:19.705 17.636 - 17.730: 99.2653% ( 6) 00:18:19.705 17.730 - 17.825: 99.3265% ( 8) 00:18:19.705 17.825 - 17.920: 99.3878% ( 8) 00:18:19.705 17.920 - 18.015: 99.4413% ( 7) 00:18:19.705 18.015 - 18.110: 99.4720% ( 4) 00:18:19.705 
18.110 - 18.204: 99.5179% ( 6) 00:18:19.705 18.204 - 18.299: 99.5714% ( 7) 00:18:19.705 18.299 - 18.394: 99.6174% ( 6) 00:18:19.705 18.394 - 18.489: 99.6556% ( 5) 00:18:19.705 18.489 - 18.584: 99.6709% ( 2) 00:18:19.705 18.584 - 18.679: 99.7168% ( 6) 00:18:19.705 18.679 - 18.773: 99.7321% ( 2) 00:18:19.705 18.773 - 18.868: 99.7475% ( 2) 00:18:19.705 18.868 - 18.963: 99.7781% ( 4) 00:18:19.705 18.963 - 19.058: 99.8010% ( 3) 00:18:19.705 19.058 - 19.153: 99.8087% ( 1) 00:18:19.705 19.342 - 19.437: 99.8163% ( 1) 00:18:19.705 19.437 - 19.532: 99.8240% ( 1) 00:18:19.705 19.532 - 19.627: 99.8316% ( 1) 00:18:19.705 19.627 - 19.721: 99.8469% ( 2) 00:18:19.705 19.816 - 19.911: 99.8546% ( 1) 00:18:19.705 21.997 - 22.092: 99.8622% ( 1) 00:18:19.705 22.850 - 22.945: 99.8699% ( 1) 00:18:19.705 23.135 - 23.230: 99.8776% ( 1) 00:18:19.705 32.237 - 32.427: 99.8852% ( 1) 00:18:19.705 3980.705 - 4004.978: 99.9617% ( 10) 00:18:19.705 4004.978 - 4029.250: 100.0000% ( 5) 00:18:19.705 00:18:19.705 Complete histogram 00:18:19.705 ================== 00:18:19.705 Range in us Cumulative Count 00:18:19.705 2.062 - 2.074: 0.9796% ( 128) 00:18:19.705 2.074 - 2.086: 33.3818% ( 4234) 00:18:19.705 2.086 - 2.098: 47.2871% ( 1817) 00:18:19.705 2.098 - 2.110: 50.2411% ( 386) 00:18:19.705 2.110 - 2.121: 58.4832% ( 1077) 00:18:19.705 2.121 - 2.133: 61.0086% ( 330) 00:18:19.705 2.133 - 2.145: 65.1182% ( 537) 00:18:19.705 2.145 - 2.157: 79.8117% ( 1920) 00:18:19.705 2.157 - 2.169: 82.5132% ( 353) 00:18:19.705 2.169 - 2.181: 84.5565% ( 267) 00:18:19.705 2.181 - 2.193: 87.5182% ( 387) 00:18:19.705 2.193 - 2.204: 88.4365% ( 120) 00:18:19.705 2.204 - 2.216: 89.2477% ( 106) 00:18:19.705 2.216 - 2.228: 91.2528% ( 262) 00:18:19.705 2.228 - 2.240: 92.7604% ( 197) 00:18:19.705 2.240 - 2.252: 94.1762% ( 185) 00:18:19.705 2.252 - 2.264: 94.6966% ( 68) 00:18:19.705 2.264 - 2.276: 94.8267% ( 17) 00:18:19.705 2.276 - 2.287: 94.9797% ( 20) 00:18:19.705 2.287 - 2.299: 95.1634% ( 24) 00:18:19.705 2.299 - 2.311: 95.4619% 
( 39) 00:18:19.705 2.311 - 2.323: 95.8981% ( 57) 00:18:19.705 2.323 - 2.335: 95.9669% ( 9) 00:18:19.705 2.335 - 2.347: 96.0358% ( 9) 00:18:19.705 2.347 - 2.359: 96.1659% ( 17) 00:18:19.705 2.359 - 2.370: 96.3955% ( 30) 00:18:19.705 2.370 - 2.382: 96.6251% ( 30) 00:18:19.705 2.382 - 2.394: 97.0154% ( 51) 00:18:19.705 2.394 - 2.406: 97.3674% ( 46) 00:18:19.705 2.406 - 2.418: 97.6200% ( 33) 00:18:19.705 2.418 - 2.430: 97.8495% ( 30) 00:18:19.705 2.430 - 2.441: 98.0332% ( 24) 00:18:19.705 2.441 - 2.453: 98.1557% ( 16) 00:18:19.705 2.453 - 2.465: 98.2322% ( 10) 00:18:19.705 2.465 - 2.477: 98.2934% ( 8) 00:18:19.705 2.477 - 2.489: 98.3470% ( 7) 00:18:19.705 2.489 - 2.501: 98.4235% ( 10) 00:18:19.705 2.501 - 2.513: 98.4694% ( 6) 00:18:19.705 2.513 - 2.524: 98.4847% ( 2) 00:18:19.705 2.536 - 2.548: 98.5230% ( 5) 00:18:19.705 2.548 - 2.560: 98.5383% ( 2) 00:18:19.705 2.560 - 2.572: 98.5536% ( 2) 00:18:19.705 2.572 - 2.584: 98.5919% ( 5) 00:18:19.705 2.596 - 2.607: 98.6072% ( 2) 00:18:19.705 2.643 - 2.655: 98.6148% ( 1) 00:18:19.705 2.679 - 2.690: 98.6225% ( 1) 00:18:19.705 2.702 - 2.714: 98.6301% ( 1) 00:18:19.705 2.726 - 2.738: 98.6378% ( 1) 00:18:19.705 2.738 - 2.750: 98.6454% ( 1) 00:18:19.705 2.785 - 2.797: 98.6531% ( 1) 00:18:19.705 2.821 - 2.833: 98.6607% ( 1) 00:18:19.705 2.987 - 2.999: 98.6684% ( 1) 00:18:19.705 3.034 - 3.058: 98.6761% ( 1) 00:18:19.705 3.081 - 3.105: 98.6837% ( 1) 00:18:19.705 3.153 - 3.176: 98.6914% ( 1) 00:18:19.705 3.271 - 3.295: 98.6990% ( 1) 00:18:19.705 3.461 - 3.484: 98.7067% ( 1) 00:18:19.705 3.484 - 3.508: 98.7143% ( 1) 00:18:19.705 3.579 - 3.603: 98.7220% ( 1) 00:18:19.705 3.650 - 3.674: 98.7296% ( 1) 00:18:19.705 3.674 - 3.698: 98.7373% ( 1) 00:18:19.705 3.745 - 3.769: 98.7449% ( 1) 00:18:19.705 3.769 - 3.793: 98.7526% ( 1) 00:18:19.705 3.793 - 3.816: 98.7602% ( 1) 00:18:19.705 3.816 - 3.840: 98.7679% ( 1) 00:18:19.706 3.840 - 3.864: 98.7755% ( 1) 00:18:19.706 3.864 - 3.887: 98.7908% ( 2) 00:18:19.706 4.101 - 4.124: 98.7985% ( 1) 
00:18:19.706 4.243 - 4.267: 98.8062% ( 1) [2024-11-18 00:23:43.489018] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:19.964 4.314 - 4.338: 98.8138% ( 1) 00:18:19.964 4.812 - 4.836: 98.8291% ( 2) 00:18:19.964 4.836 - 4.859: 98.8368% ( 1) 00:18:19.964 5.096 - 5.120: 98.8444% ( 1) 00:18:19.964 5.120 - 5.144: 98.8521% ( 1) 00:18:19.964 5.144 - 5.167: 98.8597% ( 1) 00:18:19.964 5.215 - 5.239: 98.8674% ( 1) 00:18:19.964 5.239 - 5.262: 98.8750% ( 1) 00:18:19.964 5.404 - 5.428: 98.8827% ( 1) 00:18:19.964 5.570 - 5.594: 98.8903% ( 1) 00:18:19.964 5.594 - 5.618: 98.8980% ( 1) 00:18:19.964 5.665 - 5.689: 98.9056% ( 1) 00:18:19.964 5.831 - 5.855: 98.9133% ( 1) 00:18:19.964 5.950 - 5.973: 98.9209% ( 1) 00:18:19.964 6.116 - 6.163: 98.9286% ( 1) 00:18:19.964 6.163 - 6.210: 98.9516% ( 3) 00:18:19.964 6.353 - 6.400: 98.9592% ( 1) 00:18:19.964 6.732 - 6.779: 98.9669% ( 1) 00:18:19.964 8.486 - 8.533: 98.9745% ( 1) 00:18:19.964 9.007 - 9.055: 98.9822% ( 1) 00:18:19.964 10.382 - 10.430: 98.9898% ( 1) 00:18:19.964 15.644 - 15.739: 99.0051% ( 2) 00:18:19.964 15.834 - 15.929: 99.0128% ( 1) 00:18:19.964 15.929 - 16.024: 99.0204% ( 1) 00:18:19.964 16.024 - 16.119: 99.0434% ( 3) 00:18:19.964 16.119 - 16.213: 99.0510% ( 1) 00:18:19.964 16.213 - 16.308: 99.1046% ( 7) 00:18:19.964 16.308 - 16.403: 99.1199% ( 2) 00:18:19.964 16.403 - 16.498: 99.1352% ( 2) 00:18:19.964 16.498 - 16.593: 99.1505% ( 2) 00:18:19.964 16.593 - 16.687: 99.1811% ( 4) 00:18:19.964 16.687 - 16.782: 99.2271% ( 6) 00:18:19.964 16.782 - 16.877: 99.2347% ( 1) 00:18:19.964 16.877 - 16.972: 99.2577% ( 3) 00:18:19.964 16.972 - 17.067: 99.2653% ( 1) 00:18:19.964 17.067 - 17.161: 99.2730% ( 1) 00:18:19.964 17.351 - 17.446: 99.2806% ( 1) 00:18:19.964 17.446 - 17.541: 99.2959% ( 2) 00:18:19.964 17.541 - 17.636: 99.3036% ( 1) 00:18:19.964 17.730 - 17.825: 99.3189% ( 2) 00:18:19.964 17.825 - 17.920: 99.3265% ( 1) 00:18:19.964 17.920 - 18.015: 99.3342% ( 1) 
00:18:19.964 18.015 - 18.110: 99.3495% ( 2) 00:18:19.964 18.204 - 18.299: 99.3648% ( 2) 00:18:19.964 18.299 - 18.394: 99.3801% ( 2) 00:18:19.964 18.394 - 18.489: 99.3954% ( 2) 00:18:19.964 18.679 - 18.773: 99.4031% ( 1) 00:18:19.964 18.773 - 18.868: 99.4107% ( 1) 00:18:19.964 2014.625 - 2026.761: 99.4184% ( 1) 00:18:19.964 3980.705 - 4004.978: 99.7628% ( 45) 00:18:19.964 4004.978 - 4029.250: 99.9923% ( 30) 00:18:19.964 5995.330 - 6019.603: 100.0000% ( 1) 00:18:19.964 00:18:19.964 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:19.964 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:19.964 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:19.964 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:19.964 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:19.964 [ 00:18:19.964 { 00:18:19.964 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:19.964 "subtype": "Discovery", 00:18:19.964 "listen_addresses": [], 00:18:19.964 "allow_any_host": true, 00:18:19.964 "hosts": [] 00:18:19.964 }, 00:18:19.964 { 00:18:19.964 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:19.964 "subtype": "NVMe", 00:18:19.964 "listen_addresses": [ 00:18:19.964 { 00:18:19.964 "trtype": "VFIOUSER", 00:18:19.964 "adrfam": "IPv4", 00:18:19.964 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:19.964 "trsvcid": "0" 00:18:19.964 } 00:18:19.964 ], 00:18:19.964 "allow_any_host": true, 00:18:19.964 "hosts": [], 00:18:19.964 "serial_number": "SPDK1", 00:18:19.964 "model_number": "SPDK bdev Controller", 
00:18:19.964 "max_namespaces": 32, 00:18:19.964 "min_cntlid": 1, 00:18:19.964 "max_cntlid": 65519, 00:18:19.964 "namespaces": [ 00:18:19.964 { 00:18:19.964 "nsid": 1, 00:18:19.964 "bdev_name": "Malloc1", 00:18:19.964 "name": "Malloc1", 00:18:19.964 "nguid": "6E81247102DA4856B2BF53D67B77D7EC", 00:18:19.964 "uuid": "6e812471-02da-4856-b2bf-53d67b77d7ec" 00:18:19.964 }, 00:18:19.964 { 00:18:19.964 "nsid": 2, 00:18:19.964 "bdev_name": "Malloc3", 00:18:19.964 "name": "Malloc3", 00:18:19.964 "nguid": "115EA993EE4E4C4581CB9EC050E31EC4", 00:18:19.965 "uuid": "115ea993-ee4e-4c45-81cb-9ec050e31ec4" 00:18:19.965 } 00:18:19.965 ] 00:18:19.965 }, 00:18:19.965 { 00:18:19.965 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:19.965 "subtype": "NVMe", 00:18:19.965 "listen_addresses": [ 00:18:19.965 { 00:18:19.965 "trtype": "VFIOUSER", 00:18:19.965 "adrfam": "IPv4", 00:18:19.965 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:19.965 "trsvcid": "0" 00:18:19.965 } 00:18:19.965 ], 00:18:19.965 "allow_any_host": true, 00:18:19.965 "hosts": [], 00:18:19.965 "serial_number": "SPDK2", 00:18:19.965 "model_number": "SPDK bdev Controller", 00:18:19.965 "max_namespaces": 32, 00:18:19.965 "min_cntlid": 1, 00:18:19.965 "max_cntlid": 65519, 00:18:19.965 "namespaces": [ 00:18:19.965 { 00:18:19.965 "nsid": 1, 00:18:19.965 "bdev_name": "Malloc2", 00:18:19.965 "name": "Malloc2", 00:18:19.965 "nguid": "49285C2BB76848BF913AD9A790D6ED80", 00:18:19.965 "uuid": "49285c2b-b768-48bf-913a-d9a790d6ed80" 00:18:19.965 } 00:18:19.965 ] 00:18:19.965 } 00:18:19.965 ] 00:18:20.223 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:20.223 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=233966 00:18:20.223 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER 
traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:20.223 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:20.223 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:20.223 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:20.223 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:20.223 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:18:20.223 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:20.223 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:20.223 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:20.223 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:18:20.223 00:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:20.223 [2024-11-18 00:23:43.965922] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:20.223 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:20.223 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:20.223 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:20.223 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:20.223 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:20.790 Malloc4 00:18:20.790 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:20.790 [2024-11-18 00:23:44.599596] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:21.048 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:21.048 Asynchronous Event Request test 00:18:21.048 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:21.048 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:21.049 Registering asynchronous event callbacks... 00:18:21.049 Starting namespace attribute notice tests for all controllers... 00:18:21.049 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:21.049 aer_cb - Changed Namespace 00:18:21.049 Cleaning up... 
00:18:21.309 [ 00:18:21.309 { 00:18:21.309 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:21.309 "subtype": "Discovery", 00:18:21.309 "listen_addresses": [], 00:18:21.309 "allow_any_host": true, 00:18:21.309 "hosts": [] 00:18:21.309 }, 00:18:21.309 { 00:18:21.309 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:21.309 "subtype": "NVMe", 00:18:21.309 "listen_addresses": [ 00:18:21.309 { 00:18:21.309 "trtype": "VFIOUSER", 00:18:21.309 "adrfam": "IPv4", 00:18:21.309 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:21.309 "trsvcid": "0" 00:18:21.309 } 00:18:21.309 ], 00:18:21.309 "allow_any_host": true, 00:18:21.309 "hosts": [], 00:18:21.309 "serial_number": "SPDK1", 00:18:21.309 "model_number": "SPDK bdev Controller", 00:18:21.309 "max_namespaces": 32, 00:18:21.309 "min_cntlid": 1, 00:18:21.309 "max_cntlid": 65519, 00:18:21.309 "namespaces": [ 00:18:21.309 { 00:18:21.309 "nsid": 1, 00:18:21.309 "bdev_name": "Malloc1", 00:18:21.309 "name": "Malloc1", 00:18:21.309 "nguid": "6E81247102DA4856B2BF53D67B77D7EC", 00:18:21.309 "uuid": "6e812471-02da-4856-b2bf-53d67b77d7ec" 00:18:21.309 }, 00:18:21.309 { 00:18:21.309 "nsid": 2, 00:18:21.309 "bdev_name": "Malloc3", 00:18:21.309 "name": "Malloc3", 00:18:21.309 "nguid": "115EA993EE4E4C4581CB9EC050E31EC4", 00:18:21.309 "uuid": "115ea993-ee4e-4c45-81cb-9ec050e31ec4" 00:18:21.309 } 00:18:21.309 ] 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:21.310 "subtype": "NVMe", 00:18:21.310 "listen_addresses": [ 00:18:21.310 { 00:18:21.310 "trtype": "VFIOUSER", 00:18:21.310 "adrfam": "IPv4", 00:18:21.310 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:21.310 "trsvcid": "0" 00:18:21.310 } 00:18:21.310 ], 00:18:21.310 "allow_any_host": true, 00:18:21.310 "hosts": [], 00:18:21.310 "serial_number": "SPDK2", 00:18:21.310 "model_number": "SPDK bdev Controller", 00:18:21.310 "max_namespaces": 32, 00:18:21.310 "min_cntlid": 1, 00:18:21.310 "max_cntlid": 65519, 00:18:21.310 "namespaces": [ 
00:18:21.310 { 00:18:21.310 "nsid": 1, 00:18:21.310 "bdev_name": "Malloc2", 00:18:21.310 "name": "Malloc2", 00:18:21.310 "nguid": "49285C2BB76848BF913AD9A790D6ED80", 00:18:21.310 "uuid": "49285c2b-b768-48bf-913a-d9a790d6ed80" 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "nsid": 2, 00:18:21.310 "bdev_name": "Malloc4", 00:18:21.310 "name": "Malloc4", 00:18:21.310 "nguid": "39365A09FA254FECA855A7B218FFEAF5", 00:18:21.310 "uuid": "39365a09-fa25-4fec-a855-a7b218ffeaf5" 00:18:21.310 } 00:18:21.310 ] 00:18:21.310 } 00:18:21.310 ] 00:18:21.310 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 233966 00:18:21.310 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:21.310 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 228375 00:18:21.310 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 228375 ']' 00:18:21.310 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 228375 00:18:21.310 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:21.310 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.310 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 228375 00:18:21.310 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:21.310 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:21.310 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 228375' 00:18:21.310 killing process with pid 228375 00:18:21.310 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 228375 00:18:21.310 00:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 228375 00:18:21.570 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:21.570 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:21.570 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:21.570 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:21.570 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:21.570 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=234115 00:18:21.570 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:21.570 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 234115' 00:18:21.570 Process pid: 234115 00:18:21.570 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:21.570 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 234115 00:18:21.570 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 234115 ']' 00:18:21.570 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.570 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.570 00:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.570 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.570 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:21.570 [2024-11-18 00:23:45.254023] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:21.570 [2024-11-18 00:23:45.255047] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:18:21.570 [2024-11-18 00:23:45.255108] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.570 [2024-11-18 00:23:45.325397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:21.570 [2024-11-18 00:23:45.375866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.570 [2024-11-18 00:23:45.375923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.570 [2024-11-18 00:23:45.375951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.570 [2024-11-18 00:23:45.375963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.570 [2024-11-18 00:23:45.375972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:21.570 [2024-11-18 00:23:45.377506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.570 [2024-11-18 00:23:45.377566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.570 [2024-11-18 00:23:45.379332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:21.570 [2024-11-18 00:23:45.379344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.830 [2024-11-18 00:23:45.475702] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:21.830 [2024-11-18 00:23:45.475962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:21.830 [2024-11-18 00:23:45.476215] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:21.830 [2024-11-18 00:23:45.476796] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:21.830 [2024-11-18 00:23:45.477022] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
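The xtrace lines that follow repeat the harness's per-device setup. As a hedged summary, the sequence they perform can be sketched as below; the RPC names, NQNs, and socket paths are taken verbatim from this log, but the loop itself is an illustrative reconstruction of target/nvmf_vfio_user.sh (not the script's actual text), and it assumes a running nvmf_tgt with SPDK's rpc.py on PATH.

```shell
# Sketch of the per-device vfio-user setup shown in the log below.
# Assumptions: nvmf_tgt is already running and rpc.py resolves to
# spdk/scripts/rpc.py; commands/arguments mirror the log's xtrace.
rpc.py nvmf_create_transport -t VFIOUSER           # vfio-user transport
mkdir -p /var/run/vfio-user
for i in 1 2; do
  mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
  rpc.py bdev_malloc_create 64 512 -b "Malloc$i"   # 64 MiB bdev, 512 B blocks
  rpc.py nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
  rpc.py nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
  rpc.py nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
    -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
done
```

This is a config/setup fragment against an external SPDK daemon, so it is not runnable standalone; it only condenses the RPC calls traced in the surrounding log lines.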
00:18:21.830 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.830 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:21.830 00:23:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:22.767 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:23.025 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:23.025 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:23.025 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:23.025 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:23.025 00:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:23.595 Malloc1 00:18:23.595 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:23.854 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:24.113 00:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:18:24.371 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:24.371 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:24.371 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:24.630 Malloc2 00:18:24.630 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:24.889 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:25.147 00:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:25.406 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:25.406 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 234115 00:18:25.406 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 234115 ']' 00:18:25.406 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 234115 00:18:25.406 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:25.406 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.406 00:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 234115 00:18:25.406 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:25.406 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:25.406 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 234115' 00:18:25.406 killing process with pid 234115 00:18:25.406 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 234115 00:18:25.406 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 234115 00:18:25.664 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:25.664 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:25.664 00:18:25.664 real 0m53.739s 00:18:25.664 user 3m27.543s 00:18:25.664 sys 0m3.973s 00:18:25.664 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.664 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:25.664 ************************************ 00:18:25.664 END TEST nvmf_vfio_user 00:18:25.664 ************************************ 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:18:25.923 ************************************ 00:18:25.923 START TEST nvmf_vfio_user_nvme_compliance 00:18:25.923 ************************************ 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:25.923 * Looking for test storage... 00:18:25.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.923 00:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.923 00:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:25.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.923 --rc genhtml_branch_coverage=1 00:18:25.923 --rc genhtml_function_coverage=1 00:18:25.923 --rc genhtml_legend=1 00:18:25.923 --rc geninfo_all_blocks=1 00:18:25.923 --rc geninfo_unexecuted_blocks=1 00:18:25.923 00:18:25.923 ' 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:25.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.923 --rc genhtml_branch_coverage=1 00:18:25.923 --rc genhtml_function_coverage=1 00:18:25.923 --rc genhtml_legend=1 00:18:25.923 --rc geninfo_all_blocks=1 00:18:25.923 --rc geninfo_unexecuted_blocks=1 00:18:25.923 00:18:25.923 ' 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:25.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.923 --rc genhtml_branch_coverage=1 00:18:25.923 --rc genhtml_function_coverage=1 00:18:25.923 --rc 
genhtml_legend=1 00:18:25.923 --rc geninfo_all_blocks=1 00:18:25.923 --rc geninfo_unexecuted_blocks=1 00:18:25.923 00:18:25.923 ' 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:25.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.923 --rc genhtml_branch_coverage=1 00:18:25.923 --rc genhtml_function_coverage=1 00:18:25.923 --rc genhtml_legend=1 00:18:25.923 --rc geninfo_all_blocks=1 00:18:25.923 --rc geninfo_unexecuted_blocks=1 00:18:25.923 00:18:25.923 ' 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:25.923 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.924 00:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:25.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:25.924 00:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=234721 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 234721' 00:18:25.924 Process pid: 234721 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 234721 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 234721 ']' 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.924 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:25.924 [2024-11-18 00:23:49.717188] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:18:25.924 [2024-11-18 00:23:49.717263] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.183 [2024-11-18 00:23:49.784546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:26.183 [2024-11-18 00:23:49.829719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.183 [2024-11-18 00:23:49.829769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.183 [2024-11-18 00:23:49.829783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.183 [2024-11-18 00:23:49.829794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.183 [2024-11-18 00:23:49.829804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:26.183 [2024-11-18 00:23:49.831130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.183 [2024-11-18 00:23:49.831196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.183 [2024-11-18 00:23:49.831200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.183 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.183 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:18:26.183 00:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:27.555 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:27.555 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:27.555 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:27.555 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.555 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:27.555 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.555 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:27.555 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:27.555 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.555 00:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:27.555 malloc0 00:18:27.555 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.555 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:27.555 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.555 00:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:27.555 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.555 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:27.555 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.555 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:27.555 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.555 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:27.555 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.556 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:27.556 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:27.556 00:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:27.556 00:18:27.556 00:18:27.556 CUnit - A unit testing framework for C - Version 2.1-3 00:18:27.556 http://cunit.sourceforge.net/ 00:18:27.556 00:18:27.556 00:18:27.556 Suite: nvme_compliance 00:18:27.556 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-18 00:23:51.190095] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:27.556 [2024-11-18 00:23:51.191585] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:27.556 [2024-11-18 00:23:51.191634] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:27.556 [2024-11-18 00:23:51.191646] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:27.556 [2024-11-18 00:23:51.193110] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:27.556 passed 00:18:27.556 Test: admin_identify_ctrlr_verify_fused ...[2024-11-18 00:23:51.278700] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:27.556 [2024-11-18 00:23:51.281721] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:27.556 passed 00:18:27.556 Test: admin_identify_ns ...[2024-11-18 00:23:51.368146] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:27.814 [2024-11-18 00:23:51.427329] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:27.814 [2024-11-18 00:23:51.435331] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:27.814 [2024-11-18 00:23:51.456452] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:18:27.814 passed 00:18:27.814 Test: admin_get_features_mandatory_features ...[2024-11-18 00:23:51.538945] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:27.814 [2024-11-18 00:23:51.541966] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:27.814 passed 00:18:27.814 Test: admin_get_features_optional_features ...[2024-11-18 00:23:51.624498] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:27.814 [2024-11-18 00:23:51.627519] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:28.072 passed 00:18:28.072 Test: admin_set_features_number_of_queues ...[2024-11-18 00:23:51.711635] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:28.072 [2024-11-18 00:23:51.820439] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:28.072 passed 00:18:28.331 Test: admin_get_log_page_mandatory_logs ...[2024-11-18 00:23:51.901091] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:28.331 [2024-11-18 00:23:51.904112] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:28.331 passed 00:18:28.331 Test: admin_get_log_page_with_lpo ...[2024-11-18 00:23:51.987249] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:28.331 [2024-11-18 00:23:52.056326] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:28.331 [2024-11-18 00:23:52.069410] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:28.331 passed 00:18:28.331 Test: fabric_property_get ...[2024-11-18 00:23:52.151920] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:28.331 [2024-11-18 00:23:52.153217] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:28.589 [2024-11-18 00:23:52.154953] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:28.589 passed 00:18:28.590 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-18 00:23:52.237464] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:28.590 [2024-11-18 00:23:52.238790] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:28.590 [2024-11-18 00:23:52.240484] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:28.590 passed 00:18:28.590 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-18 00:23:52.323688] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:28.590 [2024-11-18 00:23:52.407318] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:28.848 [2024-11-18 00:23:52.423322] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:28.848 [2024-11-18 00:23:52.428432] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:28.848 passed 00:18:28.848 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-18 00:23:52.514581] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:28.848 [2024-11-18 00:23:52.515882] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:28.848 [2024-11-18 00:23:52.517619] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:28.848 passed 00:18:28.848 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-18 00:23:52.600739] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:29.106 [2024-11-18 00:23:52.676324] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:29.106 [2024-11-18 
00:23:52.700321] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:29.106 [2024-11-18 00:23:52.705437] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:29.106 passed 00:18:29.106 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-18 00:23:52.788026] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:29.106 [2024-11-18 00:23:52.789346] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:29.106 [2024-11-18 00:23:52.789403] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:29.106 [2024-11-18 00:23:52.791048] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:29.106 passed 00:18:29.106 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-18 00:23:52.876324] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:29.364 [2024-11-18 00:23:52.966320] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:29.364 [2024-11-18 00:23:52.974323] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:29.364 [2024-11-18 00:23:52.982338] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:29.364 [2024-11-18 00:23:52.990336] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:29.364 [2024-11-18 00:23:53.019431] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:29.364 passed 00:18:29.364 Test: admin_create_io_sq_verify_pc ...[2024-11-18 00:23:53.104835] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:29.364 [2024-11-18 00:23:53.122333] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:29.364 [2024-11-18 00:23:53.140137] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:29.364 passed 00:18:29.622 Test: admin_create_io_qp_max_qps ...[2024-11-18 00:23:53.220710] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:30.564 [2024-11-18 00:23:54.334330] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:18:31.147 [2024-11-18 00:23:54.719875] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:31.147 passed 00:18:31.147 Test: admin_create_io_sq_shared_cq ...[2024-11-18 00:23:54.800215] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:31.147 [2024-11-18 00:23:54.933322] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:31.405 [2024-11-18 00:23:54.970410] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:31.405 passed 00:18:31.405 00:18:31.405 Run Summary: Type Total Ran Passed Failed Inactive 00:18:31.405 suites 1 1 n/a 0 0 00:18:31.405 tests 18 18 18 0 0 00:18:31.405 asserts 360 360 360 0 n/a 00:18:31.405 00:18:31.405 Elapsed time = 1.567 seconds 00:18:31.405 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 234721 00:18:31.405 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 234721 ']' 00:18:31.405 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 234721 00:18:31.405 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:18:31.405 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.405 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 234721 00:18:31.405 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:31.405 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:31.405 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 234721' 00:18:31.405 killing process with pid 234721 00:18:31.405 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 234721 00:18:31.405 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 234721 00:18:31.663 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:31.664 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:31.664 00:18:31.664 real 0m5.778s 00:18:31.664 user 0m16.233s 00:18:31.664 sys 0m0.559s 00:18:31.664 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.664 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:31.664 ************************************ 00:18:31.664 END TEST nvmf_vfio_user_nvme_compliance 00:18:31.664 ************************************ 00:18:31.664 00:23:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:31.664 00:23:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:31.664 00:23:55 nvmf_tcp.nvmf_target_extra -- 
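For reference, the compliance pass recorded above reduces to a short target-setup sequence plus one tool invocation. The commands below are copied from the xtrace lines in this log; the `rpc.py` path and the `$SPDK_DIR` variable are assumptions standing in for the harness's `rpc_cmd` wrapper, and the transport/bdev creation steps are inferred from the equivalent fuzz-test setup later in this log.

```shell
#!/usr/bin/env bash
# Sketch of the vfio-user compliance flow logged above. Assumes an SPDK
# build tree at $SPDK_DIR with nvmf_tgt already running and listening on
# /var/tmp/spdk.sock (the default rpc.py socket).
SPDK_DIR=/path/to/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

mkdir -p /var/run/vfio-user
$RPC nvmf_create_transport -t VFIOUSER
$RPC bdev_malloc_create 64 512 -b malloc0
$RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
$RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0

# CUnit suite exercised in the log (18 tests, 18 passed, ~1.6 s):
"$SPDK_DIR/test/nvme/compliance/nvme_compliance" -g \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
```

The expected-error lines interleaved in the log (`invalid NSID`, `I/O sqid:0 does not exist`, `non-PC CQ not supported`, and so on) are the negative cases these compliance tests deliberately trigger; a test counts as passed when the target rejects the malformed command.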
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.664 00:23:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:31.664 ************************************ 00:18:31.664 START TEST nvmf_vfio_user_fuzz 00:18:31.664 ************************************ 00:18:31.664 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:31.664 * Looking for test storage... 00:18:31.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:31.664 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:31.664 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:18:31.664 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.923 00:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:31.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.923 --rc genhtml_branch_coverage=1 00:18:31.923 --rc genhtml_function_coverage=1 00:18:31.923 --rc genhtml_legend=1 00:18:31.923 --rc geninfo_all_blocks=1 00:18:31.923 --rc geninfo_unexecuted_blocks=1 00:18:31.923 00:18:31.923 ' 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:31.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.923 --rc genhtml_branch_coverage=1 00:18:31.923 --rc genhtml_function_coverage=1 00:18:31.923 --rc genhtml_legend=1 00:18:31.923 --rc geninfo_all_blocks=1 00:18:31.923 --rc geninfo_unexecuted_blocks=1 00:18:31.923 00:18:31.923 ' 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:31.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.923 --rc genhtml_branch_coverage=1 00:18:31.923 --rc genhtml_function_coverage=1 00:18:31.923 --rc genhtml_legend=1 00:18:31.923 --rc geninfo_all_blocks=1 00:18:31.923 --rc geninfo_unexecuted_blocks=1 00:18:31.923 00:18:31.923 ' 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:31.923 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:31.923 --rc genhtml_branch_coverage=1 00:18:31.923 --rc genhtml_function_coverage=1 00:18:31.923 --rc genhtml_legend=1 00:18:31.923 --rc geninfo_all_blocks=1 00:18:31.923 --rc geninfo_unexecuted_blocks=1 00:18:31.923 00:18:31.923 ' 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.923 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.924 00:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:31.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=235452 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 235452' 00:18:31.924 Process pid: 235452 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 235452 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 235452 ']' 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.924 00:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.924 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:32.183 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.183 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:32.183 00:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:33.119 malloc0 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:18:33.119 00:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:05.194 Fuzzing completed. Shutting down the fuzz application 00:19:05.194 00:19:05.194 Dumping successful admin opcodes: 00:19:05.194 8, 9, 10, 24, 00:19:05.194 Dumping successful io opcodes: 00:19:05.194 0, 00:19:05.194 NS: 0x20000081ef00 I/O qp, Total commands completed: 687018, total successful commands: 2677, random_seed: 2068351168 00:19:05.194 NS: 0x20000081ef00 admin qp, Total commands completed: 88704, total successful commands: 710, random_seed: 655956928 00:19:05.194 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:05.194 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.194 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:05.194 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.194 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 235452 00:19:05.194 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 235452 ']' 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 235452 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 235452 00:19:05.195 00:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 235452' 00:19:05.195 killing process with pid 235452 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 235452 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 235452 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:05.195 00:19:05.195 real 0m32.144s 00:19:05.195 user 0m33.680s 00:19:05.195 sys 0m25.531s 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:05.195 ************************************ 00:19:05.195 END TEST nvmf_vfio_user_fuzz 00:19:05.195 ************************************ 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
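The fuzz pass above comes down to a single invocation of SPDK's `nvme_fuzz` tool against the vfio-user listener created by the same RPC setup as the compliance test. The command is reproduced from the xtrace line in this log; `$SPDK_DIR` is an assumed stand-in for the workspace path, and the flags are taken verbatim rather than re-derived.

```shell
# Fuzz invocation as logged: core mask 0x2, 30-second runtime, fixed
# random seed 123456, with the -F transport ID string pointing at the
# vfio-user subsystem; -N and -a are passed exactly as in the log.
SPDK_DIR=/path/to/spdk
"$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
    -N -a
```

Per the summary the run issued ~687k I/O commands and ~88k admin commands without crashing the target, after which the harness deletes the subsystem, kills the `nvmf_tgt` process, and removes `/var/run/vfio-user`.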
00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:05.195 ************************************ 00:19:05.195 START TEST nvmf_auth_target 00:19:05.195 ************************************ 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:05.195 * Looking for test storage... 00:19:05.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:05.195 00:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:05.195 00:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:05.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.195 --rc genhtml_branch_coverage=1 00:19:05.195 --rc genhtml_function_coverage=1 00:19:05.195 --rc genhtml_legend=1 00:19:05.195 --rc geninfo_all_blocks=1 00:19:05.195 --rc geninfo_unexecuted_blocks=1 00:19:05.195 00:19:05.195 ' 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:05.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.195 --rc genhtml_branch_coverage=1 00:19:05.195 --rc genhtml_function_coverage=1 00:19:05.195 --rc genhtml_legend=1 00:19:05.195 --rc geninfo_all_blocks=1 00:19:05.195 --rc geninfo_unexecuted_blocks=1 00:19:05.195 00:19:05.195 ' 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:05.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.195 --rc genhtml_branch_coverage=1 00:19:05.195 --rc genhtml_function_coverage=1 00:19:05.195 --rc genhtml_legend=1 00:19:05.195 --rc geninfo_all_blocks=1 00:19:05.195 --rc geninfo_unexecuted_blocks=1 00:19:05.195 00:19:05.195 ' 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:05.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.195 --rc genhtml_branch_coverage=1 00:19:05.195 --rc genhtml_function_coverage=1 00:19:05.195 --rc genhtml_legend=1 00:19:05.195 
--rc geninfo_all_blocks=1 00:19:05.195 --rc geninfo_unexecuted_blocks=1 00:19:05.195 00:19:05.195 ' 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.195 
00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:05.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:05.195 00:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:05.195 00:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:05.195 00:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:06.144 00:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:06.144 00:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:06.144 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:06.144 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.144 
00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:06.144 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:06.144 
00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:06.144 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:06.144 00:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:06.144 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:06.145 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:06.145 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:06.145 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:06.403 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:06.403 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:06.403 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:06.404 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:06.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:19:06.404 00:19:06.404 --- 10.0.0.2 ping statistics --- 00:19:06.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.404 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:19:06.404 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:06.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:06.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:19:06.404 00:19:06.404 --- 10.0.0.1 ping statistics --- 00:19:06.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.404 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:19:06.404 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.404 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:06.404 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:06.404 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.404 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:06.404 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:06.404 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.404 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:06.404 00:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:06.404 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:06.404 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:06.404 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:06.404 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.404 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=241504 00:19:06.404 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:06.404 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 241504 00:19:06.404 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 241504 ']' 00:19:06.404 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.404 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.404 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:06.404 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.404 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=241532 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1d4dff7ecfe5ad489b2fd6930ef41c63ea6745df4f4e7033 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.UTS 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1d4dff7ecfe5ad489b2fd6930ef41c63ea6745df4f4e7033 0 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1d4dff7ecfe5ad489b2fd6930ef41c63ea6745df4f4e7033 0 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1d4dff7ecfe5ad489b2fd6930ef41c63ea6745df4f4e7033 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.UTS 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.UTS 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.UTS 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3f37063a482cf996a9f91857d51e97c825778b86ca02b15556e3f977a04077b5 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.m3S 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3f37063a482cf996a9f91857d51e97c825778b86ca02b15556e3f977a04077b5 3 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3f37063a482cf996a9f91857d51e97c825778b86ca02b15556e3f977a04077b5 3 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3f37063a482cf996a9f91857d51e97c825778b86ca02b15556e3f977a04077b5 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.m3S 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.m3S 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.m3S 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bd9dba29c52acc166b1dcf821dd75f1d 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.34U 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bd9dba29c52acc166b1dcf821dd75f1d 1 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
bd9dba29c52acc166b1dcf821dd75f1d 1 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bd9dba29c52acc166b1dcf821dd75f1d 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:06.663 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.34U 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.34U 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.34U 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f4bb96f37005532f19fe2eb9a7d851bf94a86251db3d63b3 00:19:06.922 00:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.F0B 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f4bb96f37005532f19fe2eb9a7d851bf94a86251db3d63b3 2 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f4bb96f37005532f19fe2eb9a7d851bf94a86251db3d63b3 2 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f4bb96f37005532f19fe2eb9a7d851bf94a86251db3d63b3 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.F0B 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.F0B 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.F0B 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:06.922 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1d4bf2e9249f7dcf388c366b9cc05231b9a7bd9684447e17 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Tn2 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1d4bf2e9249f7dcf388c366b9cc05231b9a7bd9684447e17 2 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1d4bf2e9249f7dcf388c366b9cc05231b9a7bd9684447e17 2 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1d4bf2e9249f7dcf388c366b9cc05231b9a7bd9684447e17 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Tn2 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Tn2 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.Tn2 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0352c167edae164db7027103349d59db 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.wtG 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0352c167edae164db7027103349d59db 1 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0352c167edae164db7027103349d59db 1 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0352c167edae164db7027103349d59db 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.wtG 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.wtG 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.wtG 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2c10f2bc5cd2feb11eea1a6891cc1dcb59e8733ded53902a1e2944f7138345cb 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.0dg 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2c10f2bc5cd2feb11eea1a6891cc1dcb59e8733ded53902a1e2944f7138345cb 3 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 2c10f2bc5cd2feb11eea1a6891cc1dcb59e8733ded53902a1e2944f7138345cb 3 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2c10f2bc5cd2feb11eea1a6891cc1dcb59e8733ded53902a1e2944f7138345cb 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.0dg 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.0dg 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.0dg 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 241504 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 241504 ']' 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.923 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.181 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.181 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:07.181 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 241532 /var/tmp/host.sock 00:19:07.181 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 241532 ']' 00:19:07.181 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:07.181 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.181 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:07.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:07.181 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.181 00:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.439 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.439 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:07.439 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:07.439 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.439 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.697 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.697 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:07.697 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.UTS 00:19:07.697 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.697 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.697 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.697 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.UTS 00:19:07.697 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.UTS 00:19:07.955 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.m3S ]] 00:19:07.955 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.m3S 00:19:07.955 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.955 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.955 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.955 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.m3S 00:19:07.955 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.m3S 00:19:08.214 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:08.214 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.34U 00:19:08.214 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.214 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.214 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.214 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.34U 00:19:08.214 00:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.34U 00:19:08.472 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.F0B ]] 00:19:08.472 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F0B 00:19:08.472 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.472 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.472 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.472 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F0B 00:19:08.472 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F0B 00:19:08.730 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:08.730 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Tn2 00:19:08.730 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.730 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.730 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.730 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Tn2 00:19:08.730 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Tn2 00:19:09.015 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.wtG ]] 00:19:09.015 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wtG 00:19:09.015 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.015 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.015 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.016 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wtG 00:19:09.016 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wtG 00:19:09.280 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:09.280 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.0dg 00:19:09.280 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.280 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.280 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.280 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.0dg 00:19:09.280 00:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.0dg 00:19:09.538 00:24:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:09.538 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:09.538 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.538 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.538 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:09.538 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:09.796 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:09.796 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.796 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:09.796 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:09.796 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:09.796 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.796 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.796 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.796 00:24:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:09.796 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:09.796 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:09.796 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:09.796 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:10.055
00:19:10.055 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:10.055 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:10.055 00:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:10.314 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:10.314 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:10.314 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:10.314 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:10.314 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:10.314 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:10.314 {
00:19:10.314 "cntlid": 1,
00:19:10.314 "qid": 0,
00:19:10.314 "state": "enabled",
00:19:10.314 "thread": "nvmf_tgt_poll_group_000",
00:19:10.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:10.314 "listen_address": {
00:19:10.314 "trtype": "TCP",
00:19:10.314 "adrfam": "IPv4",
00:19:10.314 "traddr": "10.0.0.2",
00:19:10.314 "trsvcid": "4420"
00:19:10.314 },
00:19:10.314 "peer_address": {
00:19:10.314 "trtype": "TCP",
00:19:10.314 "adrfam": "IPv4",
00:19:10.314 "traddr": "10.0.0.1",
00:19:10.314 "trsvcid": "51158"
00:19:10.314 },
00:19:10.314 "auth": {
00:19:10.314 "state": "completed",
00:19:10.314 "digest": "sha256",
00:19:10.314 "dhgroup": "null"
00:19:10.314 }
00:19:10.314 }
00:19:10.314 ]'
00:19:10.314 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:10.314 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:10.314 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:10.572 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:10.572 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:10.572 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:10.572 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:10.572 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:10.830 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=:
00:19:10.830 00:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=:
00:19:16.100 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:16.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:16.100 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:16.100 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.100 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:16.100 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.100 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:16.100 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:19:16.100 00:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:16.100
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:16.100 {
00:19:16.100 "cntlid": 3,
00:19:16.100 "qid": 0,
00:19:16.100 "state": "enabled",
00:19:16.100 "thread": "nvmf_tgt_poll_group_000",
00:19:16.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:16.100 "listen_address": {
00:19:16.100 "trtype": "TCP",
00:19:16.100 "adrfam": "IPv4",
00:19:16.100 "traddr": "10.0.0.2",
00:19:16.100 "trsvcid": "4420"
00:19:16.100 },
00:19:16.100 "peer_address": {
00:19:16.100 "trtype": "TCP",
00:19:16.100 "adrfam": "IPv4",
00:19:16.100 "traddr": "10.0.0.1",
00:19:16.100 "trsvcid": "60410"
00:19:16.100 },
00:19:16.100 "auth": {
00:19:16.100 "state": "completed",
00:19:16.100 "digest": "sha256",
00:19:16.100 "dhgroup": "null"
00:19:16.100 }
00:19:16.100 }
00:19:16.100 ]'
00:19:16.100 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:16.358 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:16.358 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:16.359 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:16.359 00:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:16.359 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:16.359 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:16.359 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:16.617 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==:
00:19:16.617 00:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==:
00:19:17.551 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:17.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:17.551 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:17.551 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:17.551 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:17.551 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:17.551 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:17.551 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:19:17.551 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:19:17.810 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:19:17.810 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:17.810 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:17.810 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:17.810 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:17.810 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:17.810 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:17.810 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:17.810 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:17.810 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:17.810 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:17.810 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:17.810 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:18.068
00:19:18.068 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:18.068 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:18.068 00:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:18.327 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:18.327 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:18.327 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:18.327 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:18.327 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:18.328 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:18.328 {
00:19:18.328 "cntlid": 5,
00:19:18.328 "qid": 0,
00:19:18.328 "state": "enabled",
00:19:18.328 "thread": "nvmf_tgt_poll_group_000",
00:19:18.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:18.328 "listen_address": {
00:19:18.328 "trtype": "TCP",
00:19:18.328 "adrfam": "IPv4",
00:19:18.328 "traddr": "10.0.0.2",
00:19:18.328 "trsvcid": "4420"
00:19:18.328 },
00:19:18.328 "peer_address": {
00:19:18.328 "trtype": "TCP",
00:19:18.328 "adrfam": "IPv4",
00:19:18.328 "traddr": "10.0.0.1",
00:19:18.328 "trsvcid": "60436"
00:19:18.328 },
00:19:18.328 "auth": {
00:19:18.328 "state": "completed",
00:19:18.328 "digest": "sha256",
00:19:18.328 "dhgroup": "null"
00:19:18.328 }
00:19:18.328 }
00:19:18.328 ]'
00:19:18.328 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:18.328 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:18.328 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:18.586 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:18.586 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:18.586 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:18.586 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:18.586 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:18.844 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h:
00:19:18.844 00:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h:
00:19:19.779 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:19.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:19.780 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:19.780 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:19.780 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:19.780 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:19.780 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:19.780 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:19:19.780 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:19:20.038 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:19:20.038 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:20.038 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:20.038 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:20.038 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:20.038 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:20.038 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:19:20.038 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:20.038 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:20.038 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:20.038 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:20.038 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:20.038 00:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:20.295
00:19:20.295 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:20.295 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:20.295 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:20.553 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:20.553 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:20.553 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:20.553 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:20.553 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:20.553 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:20.553 {
00:19:20.553 "cntlid": 7,
00:19:20.553 "qid": 0,
00:19:20.553 "state": "enabled",
00:19:20.553 "thread": "nvmf_tgt_poll_group_000",
00:19:20.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:20.553 "listen_address": {
00:19:20.553 "trtype": "TCP",
00:19:20.553 "adrfam": "IPv4",
00:19:20.554 "traddr": "10.0.0.2",
00:19:20.554 "trsvcid": "4420"
00:19:20.554 },
00:19:20.554 "peer_address": {
00:19:20.554 "trtype": "TCP",
00:19:20.554 "adrfam": "IPv4",
00:19:20.554 "traddr": "10.0.0.1",
00:19:20.554 "trsvcid": "60460"
00:19:20.554 },
00:19:20.554 "auth": {
00:19:20.554 "state": "completed",
00:19:20.554 "digest": "sha256",
00:19:20.554 "dhgroup": "null"
00:19:20.554 }
00:19:20.554 }
00:19:20.554 ]'
00:19:20.554 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:20.554 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:20.812 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:20.812 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:20.812 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:20.812 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:20.812 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:20.812 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:21.069 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=:
00:19:21.069 00:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=:
00:19:22.003 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:22.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:22.003 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:22.003 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:22.003 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:22.003 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:22.003 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:22.003 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:22.003 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:19:22.003 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:19:22.262 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:19:22.262 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:22.262 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:22.262 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:22.262 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:22.262 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:22.262 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:22.262 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:22.262 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:22.262 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:22.262 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:22.262 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:22.262 00:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:22.519
00:19:22.519 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:22.519 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:22.519 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:22.777 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:22.777 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:22.777 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:22.777 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:22.777 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:22.777 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:22.777 {
00:19:22.777 "cntlid": 9,
00:19:22.777 "qid": 0,
00:19:22.777 "state": "enabled",
00:19:22.777 "thread": "nvmf_tgt_poll_group_000",
00:19:22.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:22.777 "listen_address": {
00:19:22.777 "trtype": "TCP",
00:19:22.777 "adrfam": "IPv4",
00:19:22.777 "traddr": "10.0.0.2",
00:19:22.777 "trsvcid": "4420"
00:19:22.777 },
00:19:22.777 "peer_address": {
00:19:22.777 "trtype": "TCP",
00:19:22.777 "adrfam": "IPv4",
00:19:22.777 "traddr": "10.0.0.1",
00:19:22.777 "trsvcid": "38228"
00:19:22.777 },
00:19:22.777 "auth": {
00:19:22.777 "state": "completed",
00:19:22.777 "digest": "sha256",
00:19:22.777 "dhgroup": "ffdhe2048"
00:19:22.777 }
00:19:22.777 }
00:19:22.777 ]'
00:19:22.777 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:22.777 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:22.778 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:22.778 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:22.778 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:23.036 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:23.036 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:23.036 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:23.295 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=:
00:19:23.295 00:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=:
00:19:24.230 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:24.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:24.230 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:24.230 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.230 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:24.230 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.230 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:24.230 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:19:24.230 00:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:19:24.489 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:19:24.489 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:24.489 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:24.489 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:19:24.489 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:24.489 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:24.489 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:24.489 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.489 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:24.489 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.489 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:24.489 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:24.489 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:24.746
00:19:24.746 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:24.746 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:24.746 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:25.004 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:25.004 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:25.004 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:25.004 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:25.004 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:25.004 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:25.004 {
00:19:25.004 "cntlid": 11,
00:19:25.004 "qid": 0,
00:19:25.004 "state": "enabled",
00:19:25.004 "thread": "nvmf_tgt_poll_group_000",
00:19:25.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:25.004 "listen_address": {
00:19:25.004 "trtype": "TCP",
00:19:25.004 "adrfam": "IPv4",
00:19:25.004 "traddr": "10.0.0.2",
00:19:25.004 "trsvcid": "4420"
00:19:25.004 },
00:19:25.004 "peer_address": {
00:19:25.004 "trtype": "TCP",
00:19:25.004 "adrfam": "IPv4",
00:19:25.004 "traddr": "10.0.0.1",
00:19:25.004 "trsvcid": "38258"
00:19:25.004 },
00:19:25.004 "auth": {
00:19:25.004 "state": "completed",
00:19:25.004 "digest": "sha256",
00:19:25.004 "dhgroup": "ffdhe2048"
00:19:25.004 }
00:19:25.004 }
00:19:25.004 ]'
00:19:25.004 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:25.004 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:25.004 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:25.004 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:25.004 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:25.262 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:25.262 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:25.262 00:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:25.520 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==:
00:19:25.520 00:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==:
00:19:26.454 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:26.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:26.454 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:26.454 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- #
xtrace_disable 00:19:26.454 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.454 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.454 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.454 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:26.454 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:26.712 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:26.713 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.713 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:26.713 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:26.713 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:26.713 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.713 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.713 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.713 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:26.713 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.713 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.713 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.713 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.971 00:19:26.971 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.971 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.971 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.230 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.230 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.230 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.230 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.230 00:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.230 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.230 { 00:19:27.230 "cntlid": 13, 00:19:27.230 "qid": 0, 00:19:27.230 "state": "enabled", 00:19:27.230 "thread": "nvmf_tgt_poll_group_000", 00:19:27.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:27.230 "listen_address": { 00:19:27.230 "trtype": "TCP", 00:19:27.230 "adrfam": "IPv4", 00:19:27.230 "traddr": "10.0.0.2", 00:19:27.230 "trsvcid": "4420" 00:19:27.230 }, 00:19:27.230 "peer_address": { 00:19:27.230 "trtype": "TCP", 00:19:27.230 "adrfam": "IPv4", 00:19:27.230 "traddr": "10.0.0.1", 00:19:27.230 "trsvcid": "38276" 00:19:27.230 }, 00:19:27.230 "auth": { 00:19:27.230 "state": "completed", 00:19:27.230 "digest": "sha256", 00:19:27.230 "dhgroup": "ffdhe2048" 00:19:27.230 } 00:19:27.230 } 00:19:27.230 ]' 00:19:27.230 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.230 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.230 00:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.230 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:27.230 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.488 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.488 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.488 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.746 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:19:27.746 00:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:28.680 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.245 00:19:29.245 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.245 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.245 00:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.504 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.504 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.504 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.504 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.504 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.504 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.504 { 00:19:29.504 "cntlid": 15, 00:19:29.504 "qid": 0, 00:19:29.504 "state": "enabled", 00:19:29.504 "thread": "nvmf_tgt_poll_group_000", 00:19:29.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:29.504 "listen_address": { 00:19:29.504 "trtype": "TCP", 00:19:29.504 "adrfam": "IPv4", 00:19:29.504 "traddr": "10.0.0.2", 00:19:29.504 "trsvcid": "4420" 00:19:29.504 }, 00:19:29.504 "peer_address": { 00:19:29.504 "trtype": "TCP", 00:19:29.504 "adrfam": "IPv4", 00:19:29.504 "traddr": "10.0.0.1", 
00:19:29.504 "trsvcid": "38306" 00:19:29.504 }, 00:19:29.504 "auth": { 00:19:29.504 "state": "completed", 00:19:29.504 "digest": "sha256", 00:19:29.504 "dhgroup": "ffdhe2048" 00:19:29.504 } 00:19:29.504 } 00:19:29.504 ]' 00:19:29.504 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.504 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.504 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.504 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:29.504 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.504 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.504 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.504 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.763 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:19:29.763 00:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:19:30.705 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.705 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.705 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.705 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.705 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.705 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.705 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.705 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:30.705 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:30.964 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:30.964 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.964 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:30.964 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:30.964 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:30.964 00:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.964 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.964 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.964 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.964 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.964 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.964 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.964 00:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.531 00:19:31.531 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.531 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.531 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.789 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.789 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.789 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.789 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.789 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.789 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.789 { 00:19:31.789 "cntlid": 17, 00:19:31.789 "qid": 0, 00:19:31.789 "state": "enabled", 00:19:31.789 "thread": "nvmf_tgt_poll_group_000", 00:19:31.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:31.789 "listen_address": { 00:19:31.789 "trtype": "TCP", 00:19:31.789 "adrfam": "IPv4", 00:19:31.789 "traddr": "10.0.0.2", 00:19:31.789 "trsvcid": "4420" 00:19:31.789 }, 00:19:31.789 "peer_address": { 00:19:31.789 "trtype": "TCP", 00:19:31.789 "adrfam": "IPv4", 00:19:31.789 "traddr": "10.0.0.1", 00:19:31.789 "trsvcid": "36548" 00:19:31.789 }, 00:19:31.789 "auth": { 00:19:31.789 "state": "completed", 00:19:31.789 "digest": "sha256", 00:19:31.789 "dhgroup": "ffdhe3072" 00:19:31.789 } 00:19:31.789 } 00:19:31.789 ]' 00:19:31.789 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.789 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.789 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.790 00:24:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:31.790 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.790 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.790 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.790 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.047 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:19:32.047 00:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:19:32.982 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.982 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.982 00:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.982 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.982 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.982 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.982 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:32.982 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:33.240 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:33.240 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.240 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:33.240 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:33.240 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:33.240 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.240 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.240 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.240 00:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.240 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.240 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.240 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.240 00:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.804 00:19:33.804 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.804 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.804 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.804 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.804 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.804 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.805 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:34.063 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.063 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.063 { 00:19:34.063 "cntlid": 19, 00:19:34.063 "qid": 0, 00:19:34.063 "state": "enabled", 00:19:34.063 "thread": "nvmf_tgt_poll_group_000", 00:19:34.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:34.063 "listen_address": { 00:19:34.063 "trtype": "TCP", 00:19:34.063 "adrfam": "IPv4", 00:19:34.063 "traddr": "10.0.0.2", 00:19:34.063 "trsvcid": "4420" 00:19:34.063 }, 00:19:34.063 "peer_address": { 00:19:34.063 "trtype": "TCP", 00:19:34.063 "adrfam": "IPv4", 00:19:34.063 "traddr": "10.0.0.1", 00:19:34.063 "trsvcid": "36578" 00:19:34.063 }, 00:19:34.063 "auth": { 00:19:34.063 "state": "completed", 00:19:34.063 "digest": "sha256", 00:19:34.063 "dhgroup": "ffdhe3072" 00:19:34.063 } 00:19:34.063 } 00:19:34.063 ]' 00:19:34.063 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.063 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.063 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.063 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:34.063 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.063 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.063 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.063 00:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.321 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:19:34.321 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:19:35.256 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.256 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.256 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.256 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.256 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.256 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.257 00:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.257 00:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.514 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:35.514 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.514 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:35.514 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:35.514 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:35.514 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.514 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.514 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.514 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.514 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.514 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.514 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.514 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.773 00:19:35.773 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.773 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.773 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.031 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.031 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.031 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.031 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.290 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.290 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.290 { 00:19:36.290 "cntlid": 21, 00:19:36.290 "qid": 0, 00:19:36.290 "state": "enabled", 00:19:36.290 "thread": "nvmf_tgt_poll_group_000", 00:19:36.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:36.290 "listen_address": { 00:19:36.290 "trtype": "TCP", 00:19:36.290 "adrfam": "IPv4", 00:19:36.290 "traddr": "10.0.0.2", 00:19:36.290 
"trsvcid": "4420" 00:19:36.290 }, 00:19:36.290 "peer_address": { 00:19:36.290 "trtype": "TCP", 00:19:36.290 "adrfam": "IPv4", 00:19:36.290 "traddr": "10.0.0.1", 00:19:36.290 "trsvcid": "36614" 00:19:36.290 }, 00:19:36.290 "auth": { 00:19:36.290 "state": "completed", 00:19:36.290 "digest": "sha256", 00:19:36.290 "dhgroup": "ffdhe3072" 00:19:36.290 } 00:19:36.290 } 00:19:36.290 ]' 00:19:36.290 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.290 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.290 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.290 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:36.290 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.290 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.290 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.290 00:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.549 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:19:36.549 00:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:19:37.485 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.485 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.485 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.485 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.485 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.485 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.485 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:37.485 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:37.743 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:37.743 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.743 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.743 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:37.743 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:37.743 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.743 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:37.743 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.743 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.743 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.743 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:37.744 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:37.744 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:38.001 00:19:38.001 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.001 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.001 00:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.260 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.260 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.260 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.260 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.519 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.519 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.519 { 00:19:38.519 "cntlid": 23, 00:19:38.519 "qid": 0, 00:19:38.519 "state": "enabled", 00:19:38.519 "thread": "nvmf_tgt_poll_group_000", 00:19:38.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:38.519 "listen_address": { 00:19:38.519 "trtype": "TCP", 00:19:38.519 "adrfam": "IPv4", 00:19:38.519 "traddr": "10.0.0.2", 00:19:38.519 "trsvcid": "4420" 00:19:38.519 }, 00:19:38.519 "peer_address": { 00:19:38.519 "trtype": "TCP", 00:19:38.519 "adrfam": "IPv4", 00:19:38.519 "traddr": "10.0.0.1", 00:19:38.519 "trsvcid": "36654" 00:19:38.519 }, 00:19:38.519 "auth": { 00:19:38.519 "state": "completed", 00:19:38.519 "digest": "sha256", 00:19:38.519 "dhgroup": "ffdhe3072" 00:19:38.519 } 00:19:38.519 } 00:19:38.519 ]' 00:19:38.519 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.519 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.519 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.519 00:25:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:38.519 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.519 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.519 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.519 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.778 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:19:38.778 00:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:19:39.713 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.713 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.713 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.713 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:39.713 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.713 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.713 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.713 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.713 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.972 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:39.972 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.972 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.972 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:39.972 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:39.972 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.972 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.972 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.972 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:39.972 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.972 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.972 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.972 00:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.541 00:19:40.541 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.541 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.541 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.541 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.541 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.541 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.541 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.541 00:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.541 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.541 { 00:19:40.541 "cntlid": 25, 00:19:40.541 "qid": 0, 00:19:40.541 "state": "enabled", 00:19:40.541 "thread": "nvmf_tgt_poll_group_000", 00:19:40.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:40.541 "listen_address": { 00:19:40.541 "trtype": "TCP", 00:19:40.541 "adrfam": "IPv4", 00:19:40.541 "traddr": "10.0.0.2", 00:19:40.541 "trsvcid": "4420" 00:19:40.541 }, 00:19:40.541 "peer_address": { 00:19:40.541 "trtype": "TCP", 00:19:40.541 "adrfam": "IPv4", 00:19:40.541 "traddr": "10.0.0.1", 00:19:40.541 "trsvcid": "36672" 00:19:40.541 }, 00:19:40.541 "auth": { 00:19:40.541 "state": "completed", 00:19:40.541 "digest": "sha256", 00:19:40.541 "dhgroup": "ffdhe4096" 00:19:40.541 } 00:19:40.541 } 00:19:40.541 ]' 00:19:40.541 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.800 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.800 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.800 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:40.800 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.800 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.800 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.800 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.057 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:19:41.057 00:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:19:41.992 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.992 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.992 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.992 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.992 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.992 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.992 00:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:41.992 00:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:42.251 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:42.251 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.251 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:42.251 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:42.251 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:42.251 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.251 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.251 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.251 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.251 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.251 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.251 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.251 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.817 00:19:42.817 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.817 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.817 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.075 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.075 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.075 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.075 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.075 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.076 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.076 { 00:19:43.076 "cntlid": 27, 00:19:43.076 "qid": 0, 00:19:43.076 "state": "enabled", 00:19:43.076 "thread": "nvmf_tgt_poll_group_000", 00:19:43.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:43.076 "listen_address": { 00:19:43.076 "trtype": "TCP", 00:19:43.076 "adrfam": "IPv4", 00:19:43.076 "traddr": "10.0.0.2", 00:19:43.076 
"trsvcid": "4420" 00:19:43.076 }, 00:19:43.076 "peer_address": { 00:19:43.076 "trtype": "TCP", 00:19:43.076 "adrfam": "IPv4", 00:19:43.076 "traddr": "10.0.0.1", 00:19:43.076 "trsvcid": "33578" 00:19:43.076 }, 00:19:43.076 "auth": { 00:19:43.076 "state": "completed", 00:19:43.076 "digest": "sha256", 00:19:43.076 "dhgroup": "ffdhe4096" 00:19:43.076 } 00:19:43.076 } 00:19:43.076 ]' 00:19:43.076 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.076 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.076 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.076 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:43.076 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.076 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.076 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.076 00:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.333 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:19:43.333 00:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:19:44.267 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.267 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.267 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.267 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.267 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.267 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.267 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:44.267 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:44.526 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:44.526 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.526 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.526 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:44.526 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:44.526 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.526 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.526 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.526 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.526 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.526 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.526 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.526 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.094 00:19:45.094 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.094 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:45.094 00:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.362 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.362 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.362 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.362 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.362 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.362 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.362 { 00:19:45.362 "cntlid": 29, 00:19:45.362 "qid": 0, 00:19:45.362 "state": "enabled", 00:19:45.362 "thread": "nvmf_tgt_poll_group_000", 00:19:45.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:45.362 "listen_address": { 00:19:45.362 "trtype": "TCP", 00:19:45.362 "adrfam": "IPv4", 00:19:45.362 "traddr": "10.0.0.2", 00:19:45.362 "trsvcid": "4420" 00:19:45.362 }, 00:19:45.362 "peer_address": { 00:19:45.362 "trtype": "TCP", 00:19:45.362 "adrfam": "IPv4", 00:19:45.362 "traddr": "10.0.0.1", 00:19:45.362 "trsvcid": "33606" 00:19:45.362 }, 00:19:45.362 "auth": { 00:19:45.362 "state": "completed", 00:19:45.362 "digest": "sha256", 00:19:45.362 "dhgroup": "ffdhe4096" 00:19:45.362 } 00:19:45.362 } 00:19:45.362 ]' 00:19:45.362 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.362 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.362 00:25:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.362 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:45.362 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.362 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.362 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.362 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.619 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:19:45.619 00:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:19:46.553 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.553 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.553 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.553 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.553 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.553 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.553 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:46.553 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:47.120 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:47.120 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.120 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.120 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:47.120 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:47.120 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.120 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:47.120 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.120 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.120 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.120 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:47.120 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.120 00:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.378 00:19:47.378 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.378 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.378 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.636 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.636 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.636 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.636 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:47.636 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.636 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.636 { 00:19:47.636 "cntlid": 31, 00:19:47.636 "qid": 0, 00:19:47.636 "state": "enabled", 00:19:47.636 "thread": "nvmf_tgt_poll_group_000", 00:19:47.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:47.636 "listen_address": { 00:19:47.636 "trtype": "TCP", 00:19:47.636 "adrfam": "IPv4", 00:19:47.636 "traddr": "10.0.0.2", 00:19:47.636 "trsvcid": "4420" 00:19:47.636 }, 00:19:47.636 "peer_address": { 00:19:47.636 "trtype": "TCP", 00:19:47.636 "adrfam": "IPv4", 00:19:47.636 "traddr": "10.0.0.1", 00:19:47.636 "trsvcid": "33638" 00:19:47.636 }, 00:19:47.636 "auth": { 00:19:47.636 "state": "completed", 00:19:47.636 "digest": "sha256", 00:19:47.636 "dhgroup": "ffdhe4096" 00:19:47.636 } 00:19:47.636 } 00:19:47.636 ]' 00:19:47.636 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.894 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.894 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.894 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:47.894 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.894 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.894 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.894 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.152 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:19:48.152 00:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:19:49.088 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.088 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.088 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.088 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.088 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.088 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.088 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.088 00:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.089 00:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.347 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:49.347 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.347 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.347 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:49.347 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:49.347 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.347 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.347 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.347 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.347 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.347 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.347 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.347 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.911 00:19:49.911 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.911 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.911 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.169 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.169 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.169 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.169 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.169 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.169 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.169 { 00:19:50.169 "cntlid": 33, 00:19:50.169 "qid": 0, 00:19:50.169 "state": "enabled", 00:19:50.169 "thread": "nvmf_tgt_poll_group_000", 00:19:50.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:50.169 "listen_address": { 00:19:50.169 "trtype": "TCP", 00:19:50.169 "adrfam": "IPv4", 00:19:50.169 "traddr": "10.0.0.2", 00:19:50.169 
"trsvcid": "4420" 00:19:50.169 }, 00:19:50.169 "peer_address": { 00:19:50.169 "trtype": "TCP", 00:19:50.169 "adrfam": "IPv4", 00:19:50.169 "traddr": "10.0.0.1", 00:19:50.169 "trsvcid": "33656" 00:19:50.169 }, 00:19:50.169 "auth": { 00:19:50.169 "state": "completed", 00:19:50.169 "digest": "sha256", 00:19:50.169 "dhgroup": "ffdhe6144" 00:19:50.169 } 00:19:50.169 } 00:19:50.169 ]' 00:19:50.169 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.169 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.169 00:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.427 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:50.427 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.427 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.427 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.427 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.685 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:19:50.685 00:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:19:51.623 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.623 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.623 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.623 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.623 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.623 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.623 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:51.623 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:51.882 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:51.882 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.882 00:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.882 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:51.882 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:51.882 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.882 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.882 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.882 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.882 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.882 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.882 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.882 00:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.449 00:19:52.449 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.449 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.449 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.716 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.716 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.716 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.716 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.716 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.716 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.716 { 00:19:52.716 "cntlid": 35, 00:19:52.716 "qid": 0, 00:19:52.716 "state": "enabled", 00:19:52.716 "thread": "nvmf_tgt_poll_group_000", 00:19:52.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:52.716 "listen_address": { 00:19:52.716 "trtype": "TCP", 00:19:52.716 "adrfam": "IPv4", 00:19:52.716 "traddr": "10.0.0.2", 00:19:52.716 "trsvcid": "4420" 00:19:52.716 }, 00:19:52.716 "peer_address": { 00:19:52.716 "trtype": "TCP", 00:19:52.716 "adrfam": "IPv4", 00:19:52.716 "traddr": "10.0.0.1", 00:19:52.716 "trsvcid": "51752" 00:19:52.716 }, 00:19:52.716 "auth": { 00:19:52.716 "state": "completed", 00:19:52.716 "digest": "sha256", 00:19:52.716 "dhgroup": "ffdhe6144" 00:19:52.716 } 00:19:52.716 } 00:19:52.716 ]' 00:19:52.716 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.716 00:25:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.716 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.716 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:52.716 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.716 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.716 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.716 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.973 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:19:52.974 00:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:19:53.907 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.907 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.907 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.907 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.907 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.907 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.907 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:53.907 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:54.166 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:54.166 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.166 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.166 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:54.166 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:54.166 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.166 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:54.166 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.166 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.166 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.166 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.166 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.166 00:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.731 00:19:54.731 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.731 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.731 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.989 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.989 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.989 00:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.989 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.989 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.989 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.989 { 00:19:54.989 "cntlid": 37, 00:19:54.989 "qid": 0, 00:19:54.989 "state": "enabled", 00:19:54.989 "thread": "nvmf_tgt_poll_group_000", 00:19:54.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:54.989 "listen_address": { 00:19:54.989 "trtype": "TCP", 00:19:54.989 "adrfam": "IPv4", 00:19:54.989 "traddr": "10.0.0.2", 00:19:54.989 "trsvcid": "4420" 00:19:54.989 }, 00:19:54.989 "peer_address": { 00:19:54.989 "trtype": "TCP", 00:19:54.989 "adrfam": "IPv4", 00:19:54.989 "traddr": "10.0.0.1", 00:19:54.989 "trsvcid": "51782" 00:19:54.989 }, 00:19:54.989 "auth": { 00:19:54.989 "state": "completed", 00:19:54.989 "digest": "sha256", 00:19:54.989 "dhgroup": "ffdhe6144" 00:19:54.989 } 00:19:54.989 } 00:19:54.989 ]' 00:19:54.989 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.989 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.989 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.248 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:55.248 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.248 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.248 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.248 00:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.505 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:19:55.505 00:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:19:56.444 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.444 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.444 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.444 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.444 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.444 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.444 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:56.444 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:56.702 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:56.702 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.702 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.702 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:56.702 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:56.702 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.702 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:56.702 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.702 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.702 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.702 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:56.702 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.702 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.271 00:19:57.271 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.271 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.271 00:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.529 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.529 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.529 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.529 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.529 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.529 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.529 { 00:19:57.529 "cntlid": 39, 00:19:57.529 "qid": 0, 00:19:57.529 "state": "enabled", 00:19:57.529 "thread": "nvmf_tgt_poll_group_000", 00:19:57.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:57.529 "listen_address": { 00:19:57.529 "trtype": "TCP", 00:19:57.529 "adrfam": 
"IPv4", 00:19:57.529 "traddr": "10.0.0.2", 00:19:57.530 "trsvcid": "4420" 00:19:57.530 }, 00:19:57.530 "peer_address": { 00:19:57.530 "trtype": "TCP", 00:19:57.530 "adrfam": "IPv4", 00:19:57.530 "traddr": "10.0.0.1", 00:19:57.530 "trsvcid": "51806" 00:19:57.530 }, 00:19:57.530 "auth": { 00:19:57.530 "state": "completed", 00:19:57.530 "digest": "sha256", 00:19:57.530 "dhgroup": "ffdhe6144" 00:19:57.530 } 00:19:57.530 } 00:19:57.530 ]' 00:19:57.530 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.530 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.530 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.530 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:57.530 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.530 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.530 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.530 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.788 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:19:57.788 00:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:19:58.722 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.722 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.722 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.722 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.722 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.722 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.722 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.722 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.722 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:58.978 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:58.978 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.978 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.978 
00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:58.978 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:58.978 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.978 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.978 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.978 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.978 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.978 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.978 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.978 00:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.910 00:19:59.910 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.910 00:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.910 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.169 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.169 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.169 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.169 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.169 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.169 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.169 { 00:20:00.169 "cntlid": 41, 00:20:00.169 "qid": 0, 00:20:00.169 "state": "enabled", 00:20:00.169 "thread": "nvmf_tgt_poll_group_000", 00:20:00.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:00.169 "listen_address": { 00:20:00.169 "trtype": "TCP", 00:20:00.169 "adrfam": "IPv4", 00:20:00.169 "traddr": "10.0.0.2", 00:20:00.169 "trsvcid": "4420" 00:20:00.169 }, 00:20:00.169 "peer_address": { 00:20:00.169 "trtype": "TCP", 00:20:00.169 "adrfam": "IPv4", 00:20:00.169 "traddr": "10.0.0.1", 00:20:00.169 "trsvcid": "51824" 00:20:00.169 }, 00:20:00.169 "auth": { 00:20:00.169 "state": "completed", 00:20:00.169 "digest": "sha256", 00:20:00.169 "dhgroup": "ffdhe8192" 00:20:00.169 } 00:20:00.169 } 00:20:00.169 ]' 00:20:00.169 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.169 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:20:00.169 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.169 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:00.169 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.169 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.169 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.169 00:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.427 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:20:00.427 00:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:20:01.362 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.362 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.362 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.362 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.362 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.362 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.362 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:01.362 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:01.620 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:01.620 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.620 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:01.620 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:01.620 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:01.620 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.620 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:01.620 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.620 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.878 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.878 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.878 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.878 00:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.445 00:20:02.445 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.445 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.445 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.011 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.011 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.011 00:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.011 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.011 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.011 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.011 { 00:20:03.011 "cntlid": 43, 00:20:03.011 "qid": 0, 00:20:03.011 "state": "enabled", 00:20:03.011 "thread": "nvmf_tgt_poll_group_000", 00:20:03.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:03.011 "listen_address": { 00:20:03.011 "trtype": "TCP", 00:20:03.011 "adrfam": "IPv4", 00:20:03.011 "traddr": "10.0.0.2", 00:20:03.011 "trsvcid": "4420" 00:20:03.011 }, 00:20:03.011 "peer_address": { 00:20:03.011 "trtype": "TCP", 00:20:03.011 "adrfam": "IPv4", 00:20:03.011 "traddr": "10.0.0.1", 00:20:03.011 "trsvcid": "46880" 00:20:03.011 }, 00:20:03.011 "auth": { 00:20:03.011 "state": "completed", 00:20:03.011 "digest": "sha256", 00:20:03.011 "dhgroup": "ffdhe8192" 00:20:03.011 } 00:20:03.011 } 00:20:03.011 ]' 00:20:03.011 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.011 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.011 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.011 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:03.011 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.011 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.011 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.011 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.276 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:20:03.277 00:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:20:04.218 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.218 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.218 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.218 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.218 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.218 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.218 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:04.218 00:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:04.477 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:04.477 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.477 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.477 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:04.477 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:04.477 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.477 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.477 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.477 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.477 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.477 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.477 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.477 00:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.416 00:20:05.416 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.416 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.416 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.675 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.675 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.675 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.675 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.675 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.675 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.675 { 00:20:05.675 "cntlid": 45, 00:20:05.675 "qid": 0, 00:20:05.675 "state": "enabled", 00:20:05.675 "thread": "nvmf_tgt_poll_group_000", 00:20:05.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:05.675 
"listen_address": { 00:20:05.675 "trtype": "TCP", 00:20:05.675 "adrfam": "IPv4", 00:20:05.675 "traddr": "10.0.0.2", 00:20:05.675 "trsvcid": "4420" 00:20:05.675 }, 00:20:05.675 "peer_address": { 00:20:05.675 "trtype": "TCP", 00:20:05.675 "adrfam": "IPv4", 00:20:05.675 "traddr": "10.0.0.1", 00:20:05.675 "trsvcid": "46896" 00:20:05.675 }, 00:20:05.675 "auth": { 00:20:05.675 "state": "completed", 00:20:05.675 "digest": "sha256", 00:20:05.675 "dhgroup": "ffdhe8192" 00:20:05.675 } 00:20:05.675 } 00:20:05.675 ]' 00:20:05.675 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.675 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.675 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.675 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:05.675 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.675 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.675 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.675 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.934 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:20:05.934 00:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:20:06.868 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.868 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.868 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.868 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.868 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.868 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.868 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:06.868 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:07.126 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:07.126 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.126 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:07.126 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:07.126 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:07.126 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.126 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:07.126 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.126 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.126 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.126 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:07.127 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.127 00:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:08.062 00:20:08.062 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.062 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:08.062 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.320 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.320 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.320 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.320 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.320 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.320 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.320 { 00:20:08.320 "cntlid": 47, 00:20:08.320 "qid": 0, 00:20:08.320 "state": "enabled", 00:20:08.320 "thread": "nvmf_tgt_poll_group_000", 00:20:08.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:08.320 "listen_address": { 00:20:08.320 "trtype": "TCP", 00:20:08.320 "adrfam": "IPv4", 00:20:08.320 "traddr": "10.0.0.2", 00:20:08.320 "trsvcid": "4420" 00:20:08.320 }, 00:20:08.320 "peer_address": { 00:20:08.320 "trtype": "TCP", 00:20:08.320 "adrfam": "IPv4", 00:20:08.320 "traddr": "10.0.0.1", 00:20:08.320 "trsvcid": "46920" 00:20:08.320 }, 00:20:08.320 "auth": { 00:20:08.320 "state": "completed", 00:20:08.320 "digest": "sha256", 00:20:08.320 "dhgroup": "ffdhe8192" 00:20:08.320 } 00:20:08.320 } 00:20:08.320 ]' 00:20:08.320 00:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.320 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.320 00:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.320 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:08.320 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.320 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.320 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.320 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.579 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:20:08.579 00:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:20:09.513 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.513 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.513 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:09.513 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.513 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.513 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:09.513 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.513 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.513 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:09.513 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:09.771 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:09.771 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.771 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:09.771 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:09.771 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:09.771 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.771 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.771 
00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.771 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.771 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.771 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.771 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.771 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.351 00:20:10.351 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.352 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.352 00:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.352 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.352 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.352 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.352 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.615 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.615 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.615 { 00:20:10.615 "cntlid": 49, 00:20:10.615 "qid": 0, 00:20:10.615 "state": "enabled", 00:20:10.615 "thread": "nvmf_tgt_poll_group_000", 00:20:10.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:10.615 "listen_address": { 00:20:10.615 "trtype": "TCP", 00:20:10.615 "adrfam": "IPv4", 00:20:10.615 "traddr": "10.0.0.2", 00:20:10.615 "trsvcid": "4420" 00:20:10.615 }, 00:20:10.615 "peer_address": { 00:20:10.615 "trtype": "TCP", 00:20:10.615 "adrfam": "IPv4", 00:20:10.615 "traddr": "10.0.0.1", 00:20:10.615 "trsvcid": "46934" 00:20:10.615 }, 00:20:10.615 "auth": { 00:20:10.615 "state": "completed", 00:20:10.615 "digest": "sha384", 00:20:10.615 "dhgroup": "null" 00:20:10.615 } 00:20:10.615 } 00:20:10.615 ]' 00:20:10.615 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.615 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.615 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.615 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:10.615 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.615 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.615 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:10.615 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.874 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:20:10.874 00:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:20:11.819 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.819 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.819 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.819 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.819 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.819 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.820 00:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:11.820 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:12.078 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:12.078 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.078 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:12.078 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:12.078 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:12.078 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.078 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.078 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.078 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.078 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.078 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.078 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.078 00:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.337 00:20:12.337 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.337 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.337 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.595 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.595 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.595 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.595 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.595 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.595 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.595 { 00:20:12.595 "cntlid": 51, 00:20:12.595 "qid": 0, 00:20:12.595 "state": "enabled", 00:20:12.595 "thread": "nvmf_tgt_poll_group_000", 00:20:12.595 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:12.595 "listen_address": { 00:20:12.595 "trtype": "TCP", 00:20:12.595 "adrfam": "IPv4", 00:20:12.595 "traddr": "10.0.0.2", 00:20:12.595 "trsvcid": "4420" 00:20:12.595 }, 00:20:12.595 "peer_address": { 00:20:12.595 "trtype": "TCP", 00:20:12.595 "adrfam": "IPv4", 00:20:12.595 "traddr": "10.0.0.1", 00:20:12.595 "trsvcid": "50020" 00:20:12.595 }, 00:20:12.595 "auth": { 00:20:12.595 "state": "completed", 00:20:12.595 "digest": "sha384", 00:20:12.595 "dhgroup": "null" 00:20:12.595 } 00:20:12.595 } 00:20:12.595 ]' 00:20:12.595 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.595 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.595 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.595 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:12.595 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.853 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.853 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.853 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.111 00:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:20:13.111 00:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:20:14.054 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.054 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.054 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.054 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.054 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.054 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.054 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:14.054 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:14.313 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:14.313 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:20:14.313 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.313 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:14.313 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:14.313 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.313 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.313 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.313 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.313 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.313 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.313 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.313 00:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.571 00:20:14.571 00:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.571 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.571 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.829 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.829 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.829 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.829 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.829 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.829 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.829 { 00:20:14.829 "cntlid": 53, 00:20:14.829 "qid": 0, 00:20:14.829 "state": "enabled", 00:20:14.829 "thread": "nvmf_tgt_poll_group_000", 00:20:14.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:14.829 "listen_address": { 00:20:14.829 "trtype": "TCP", 00:20:14.829 "adrfam": "IPv4", 00:20:14.829 "traddr": "10.0.0.2", 00:20:14.829 "trsvcid": "4420" 00:20:14.829 }, 00:20:14.829 "peer_address": { 00:20:14.829 "trtype": "TCP", 00:20:14.829 "adrfam": "IPv4", 00:20:14.829 "traddr": "10.0.0.1", 00:20:14.829 "trsvcid": "50042" 00:20:14.829 }, 00:20:14.829 "auth": { 00:20:14.829 "state": "completed", 00:20:14.829 "digest": "sha384", 00:20:14.829 "dhgroup": "null" 00:20:14.829 } 00:20:14.829 } 00:20:14.829 ]' 00:20:14.829 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:20:14.829 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.829 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.829 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:14.829 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.829 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.829 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.829 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.087 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:20:15.087 00:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:20:16.021 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.021 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.021 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.021 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.021 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.021 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.021 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:16.021 00:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:16.279 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:16.279 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.279 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.279 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:16.279 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:16.279 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.279 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:16.279 
00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.279 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.279 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.279 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:16.279 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.279 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.845 00:20:16.845 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.845 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.845 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.104 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.104 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.104 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.104 00:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.104 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.104 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.104 { 00:20:17.104 "cntlid": 55, 00:20:17.104 "qid": 0, 00:20:17.104 "state": "enabled", 00:20:17.104 "thread": "nvmf_tgt_poll_group_000", 00:20:17.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:17.104 "listen_address": { 00:20:17.104 "trtype": "TCP", 00:20:17.104 "adrfam": "IPv4", 00:20:17.104 "traddr": "10.0.0.2", 00:20:17.104 "trsvcid": "4420" 00:20:17.104 }, 00:20:17.104 "peer_address": { 00:20:17.104 "trtype": "TCP", 00:20:17.104 "adrfam": "IPv4", 00:20:17.104 "traddr": "10.0.0.1", 00:20:17.104 "trsvcid": "50074" 00:20:17.104 }, 00:20:17.104 "auth": { 00:20:17.104 "state": "completed", 00:20:17.104 "digest": "sha384", 00:20:17.104 "dhgroup": "null" 00:20:17.104 } 00:20:17.104 } 00:20:17.104 ]' 00:20:17.104 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.104 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.104 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.104 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:17.104 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.104 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.104 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.104 00:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.362 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:20:17.362 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:20:18.295 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.295 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.295 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.295 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.295 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.295 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.295 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.296 00:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.296 00:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.554 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:18.554 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.554 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:18.554 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:18.554 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:18.554 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.554 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.554 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.554 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.554 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.554 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.554 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.554 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.812 00:20:18.812 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.812 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.812 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.070 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.070 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.070 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.070 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.070 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.070 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.070 { 00:20:19.070 "cntlid": 57, 00:20:19.070 "qid": 0, 00:20:19.070 "state": "enabled", 00:20:19.070 "thread": "nvmf_tgt_poll_group_000", 00:20:19.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:19.070 "listen_address": { 00:20:19.070 "trtype": "TCP", 00:20:19.070 "adrfam": "IPv4", 00:20:19.070 "traddr": "10.0.0.2", 00:20:19.070 
"trsvcid": "4420" 00:20:19.070 }, 00:20:19.070 "peer_address": { 00:20:19.070 "trtype": "TCP", 00:20:19.070 "adrfam": "IPv4", 00:20:19.070 "traddr": "10.0.0.1", 00:20:19.070 "trsvcid": "50110" 00:20:19.070 }, 00:20:19.070 "auth": { 00:20:19.070 "state": "completed", 00:20:19.070 "digest": "sha384", 00:20:19.070 "dhgroup": "ffdhe2048" 00:20:19.070 } 00:20:19.070 } 00:20:19.070 ]' 00:20:19.070 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.070 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.070 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.329 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:19.329 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.329 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.329 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.329 00:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.588 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:20:19.588 00:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:20:20.519 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.519 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.519 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.519 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.519 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.519 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.519 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:20.519 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:20.776 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:20.777 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.777 00:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.777 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:20.777 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:20.777 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.777 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.777 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.777 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.777 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.777 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.777 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.777 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.034 00:20:21.035 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.035 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.035 00:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.293 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.293 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.293 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.293 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.293 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.293 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.293 { 00:20:21.293 "cntlid": 59, 00:20:21.293 "qid": 0, 00:20:21.293 "state": "enabled", 00:20:21.293 "thread": "nvmf_tgt_poll_group_000", 00:20:21.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:21.293 "listen_address": { 00:20:21.293 "trtype": "TCP", 00:20:21.293 "adrfam": "IPv4", 00:20:21.293 "traddr": "10.0.0.2", 00:20:21.293 "trsvcid": "4420" 00:20:21.293 }, 00:20:21.293 "peer_address": { 00:20:21.293 "trtype": "TCP", 00:20:21.293 "adrfam": "IPv4", 00:20:21.293 "traddr": "10.0.0.1", 00:20:21.293 "trsvcid": "53844" 00:20:21.293 }, 00:20:21.293 "auth": { 00:20:21.293 "state": "completed", 00:20:21.293 "digest": "sha384", 00:20:21.293 "dhgroup": "ffdhe2048" 00:20:21.293 } 00:20:21.293 } 00:20:21.293 ]' 00:20:21.293 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.551 00:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.551 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.551 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:21.551 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.551 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.551 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.551 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.810 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:20:21.810 00:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:20:22.744 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.744 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.744 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.744 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.744 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.744 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.744 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.744 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.002 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:23.002 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.002 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:23.002 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:23.002 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:23.002 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.002 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:23.002 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.002 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.002 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.002 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.002 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.002 00:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.264 00:20:23.264 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.264 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.264 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.523 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.523 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.523 00:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.523 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.523 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.523 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.523 { 00:20:23.523 "cntlid": 61, 00:20:23.523 "qid": 0, 00:20:23.523 "state": "enabled", 00:20:23.523 "thread": "nvmf_tgt_poll_group_000", 00:20:23.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:23.523 "listen_address": { 00:20:23.523 "trtype": "TCP", 00:20:23.523 "adrfam": "IPv4", 00:20:23.523 "traddr": "10.0.0.2", 00:20:23.523 "trsvcid": "4420" 00:20:23.523 }, 00:20:23.523 "peer_address": { 00:20:23.523 "trtype": "TCP", 00:20:23.523 "adrfam": "IPv4", 00:20:23.523 "traddr": "10.0.0.1", 00:20:23.523 "trsvcid": "53858" 00:20:23.523 }, 00:20:23.523 "auth": { 00:20:23.523 "state": "completed", 00:20:23.523 "digest": "sha384", 00:20:23.523 "dhgroup": "ffdhe2048" 00:20:23.523 } 00:20:23.523 } 00:20:23.523 ]' 00:20:23.523 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.782 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.782 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.782 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.782 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.782 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.782 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.782 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.040 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:20:24.040 00:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:20:24.975 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.975 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.975 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.975 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.975 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.975 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.975 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:24.975 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.255 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:25.256 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.256 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.256 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:25.256 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:25.256 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.256 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:25.256 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.256 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.256 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.256 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:25.256 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.256 00:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.531 00:20:25.531 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.531 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.531 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.801 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.801 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.801 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.801 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.801 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.801 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.801 { 00:20:25.801 "cntlid": 63, 00:20:25.801 "qid": 0, 00:20:25.801 "state": "enabled", 00:20:25.801 "thread": "nvmf_tgt_poll_group_000", 00:20:25.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:25.801 "listen_address": { 00:20:25.801 "trtype": "TCP", 00:20:25.801 "adrfam": 
"IPv4", 00:20:25.801 "traddr": "10.0.0.2", 00:20:25.801 "trsvcid": "4420" 00:20:25.801 }, 00:20:25.801 "peer_address": { 00:20:25.801 "trtype": "TCP", 00:20:25.801 "adrfam": "IPv4", 00:20:25.801 "traddr": "10.0.0.1", 00:20:25.801 "trsvcid": "53888" 00:20:25.801 }, 00:20:25.801 "auth": { 00:20:25.801 "state": "completed", 00:20:25.801 "digest": "sha384", 00:20:25.801 "dhgroup": "ffdhe2048" 00:20:25.801 } 00:20:25.801 } 00:20:25.801 ]' 00:20:25.801 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.801 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.801 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.086 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:26.086 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.086 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.086 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.086 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.414 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:20:26.414 00:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:20:27.418 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.418 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.418 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.418 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.418 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.418 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.418 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.418 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.418 00:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.418 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:27.418 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.418 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.418 
00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:27.418 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:27.418 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.418 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.418 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.418 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.418 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.418 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.418 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.418 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.043 00:20:28.043 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.043 00:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.043 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.043 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.043 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.043 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.043 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.043 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.043 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.043 { 00:20:28.043 "cntlid": 65, 00:20:28.043 "qid": 0, 00:20:28.043 "state": "enabled", 00:20:28.043 "thread": "nvmf_tgt_poll_group_000", 00:20:28.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:28.043 "listen_address": { 00:20:28.043 "trtype": "TCP", 00:20:28.043 "adrfam": "IPv4", 00:20:28.043 "traddr": "10.0.0.2", 00:20:28.043 "trsvcid": "4420" 00:20:28.043 }, 00:20:28.043 "peer_address": { 00:20:28.043 "trtype": "TCP", 00:20:28.043 "adrfam": "IPv4", 00:20:28.043 "traddr": "10.0.0.1", 00:20:28.043 "trsvcid": "53912" 00:20:28.043 }, 00:20:28.043 "auth": { 00:20:28.043 "state": "completed", 00:20:28.043 "digest": "sha384", 00:20:28.043 "dhgroup": "ffdhe3072" 00:20:28.043 } 00:20:28.043 } 00:20:28.043 ]' 00:20:28.043 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.301 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:28.301 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.301 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:28.301 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.301 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.301 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.301 00:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.559 00:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:20:28.559 00:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:20:29.493 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.493 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.493 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.493 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.493 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.493 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.493 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.493 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.751 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:29.751 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.751 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.751 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:29.751 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:29.751 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.751 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:29.751 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.751 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.751 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.751 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.752 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.752 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.316 00:20:30.316 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.316 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.316 00:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.574 00:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.574 00:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.574 00:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.574 00:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.574 00:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.574 00:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.574 { 00:20:30.574 "cntlid": 67, 00:20:30.574 "qid": 0, 00:20:30.574 "state": "enabled", 00:20:30.574 "thread": "nvmf_tgt_poll_group_000", 00:20:30.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:30.574 "listen_address": { 00:20:30.574 "trtype": "TCP", 00:20:30.574 "adrfam": "IPv4", 00:20:30.574 "traddr": "10.0.0.2", 00:20:30.574 "trsvcid": "4420" 00:20:30.574 }, 00:20:30.574 "peer_address": { 00:20:30.574 "trtype": "TCP", 00:20:30.574 "adrfam": "IPv4", 00:20:30.574 "traddr": "10.0.0.1", 00:20:30.574 "trsvcid": "53932" 00:20:30.574 }, 00:20:30.574 "auth": { 00:20:30.574 "state": "completed", 00:20:30.574 "digest": "sha384", 00:20:30.574 "dhgroup": "ffdhe3072" 00:20:30.574 } 00:20:30.574 } 00:20:30.574 ]' 00:20:30.574 00:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.574 00:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.574 00:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.574 00:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:30.574 00:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.574 00:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.574 00:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.574 00:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.833 00:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:20:30.833 00:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:20:31.765 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.765 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.765 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.765 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.765 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.765 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.765 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:31.765 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.023 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:32.023 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.023 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.023 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:32.023 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:32.023 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.023 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.023 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.023 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.023 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.023 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.023 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.023 00:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.589 00:20:32.589 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.589 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.589 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.847 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.847 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.847 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.847 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.847 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.847 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.847 { 00:20:32.847 "cntlid": 69, 00:20:32.847 "qid": 0, 00:20:32.847 "state": "enabled", 00:20:32.847 "thread": "nvmf_tgt_poll_group_000", 00:20:32.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:32.847 
"listen_address": { 00:20:32.847 "trtype": "TCP", 00:20:32.847 "adrfam": "IPv4", 00:20:32.847 "traddr": "10.0.0.2", 00:20:32.847 "trsvcid": "4420" 00:20:32.847 }, 00:20:32.847 "peer_address": { 00:20:32.847 "trtype": "TCP", 00:20:32.847 "adrfam": "IPv4", 00:20:32.847 "traddr": "10.0.0.1", 00:20:32.847 "trsvcid": "45152" 00:20:32.847 }, 00:20:32.847 "auth": { 00:20:32.847 "state": "completed", 00:20:32.847 "digest": "sha384", 00:20:32.847 "dhgroup": "ffdhe3072" 00:20:32.847 } 00:20:32.847 } 00:20:32.847 ]' 00:20:32.847 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.847 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.847 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.847 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:32.847 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.847 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.847 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.847 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.104 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:20:33.104 00:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:20:34.038 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.038 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.038 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.038 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.038 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.038 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.038 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:34.038 00:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:34.296 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:34.296 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.296 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:34.296 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:34.296 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:34.296 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.296 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:34.296 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.296 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.296 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.296 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:34.296 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.296 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.862 00:20:34.862 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.862 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:34.862 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.120 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.120 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.120 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.120 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.120 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.120 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.120 { 00:20:35.120 "cntlid": 71, 00:20:35.120 "qid": 0, 00:20:35.120 "state": "enabled", 00:20:35.120 "thread": "nvmf_tgt_poll_group_000", 00:20:35.120 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:35.120 "listen_address": { 00:20:35.120 "trtype": "TCP", 00:20:35.120 "adrfam": "IPv4", 00:20:35.120 "traddr": "10.0.0.2", 00:20:35.120 "trsvcid": "4420" 00:20:35.120 }, 00:20:35.120 "peer_address": { 00:20:35.120 "trtype": "TCP", 00:20:35.120 "adrfam": "IPv4", 00:20:35.120 "traddr": "10.0.0.1", 00:20:35.120 "trsvcid": "45174" 00:20:35.120 }, 00:20:35.120 "auth": { 00:20:35.120 "state": "completed", 00:20:35.120 "digest": "sha384", 00:20:35.120 "dhgroup": "ffdhe3072" 00:20:35.120 } 00:20:35.120 } 00:20:35.120 ]' 00:20:35.120 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.120 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.120 00:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.120 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:35.120 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.120 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.120 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.120 00:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.378 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:20:35.378 00:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:20:36.311 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.311 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.311 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:36.311 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.311 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.311 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.311 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.311 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.311 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.569 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:36.569 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.569 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.569 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:36.569 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:36.569 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.569 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.569 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:36.569 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.569 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.569 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.569 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.569 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.135 00:20:37.135 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.135 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.135 00:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.393 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.393 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.393 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.393 00:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.393 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.393 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.393 { 00:20:37.393 "cntlid": 73, 00:20:37.393 "qid": 0, 00:20:37.393 "state": "enabled", 00:20:37.393 "thread": "nvmf_tgt_poll_group_000", 00:20:37.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:37.393 "listen_address": { 00:20:37.393 "trtype": "TCP", 00:20:37.393 "adrfam": "IPv4", 00:20:37.393 "traddr": "10.0.0.2", 00:20:37.393 "trsvcid": "4420" 00:20:37.393 }, 00:20:37.393 "peer_address": { 00:20:37.393 "trtype": "TCP", 00:20:37.393 "adrfam": "IPv4", 00:20:37.393 "traddr": "10.0.0.1", 00:20:37.393 "trsvcid": "45198" 00:20:37.393 }, 00:20:37.393 "auth": { 00:20:37.393 "state": "completed", 00:20:37.393 "digest": "sha384", 00:20:37.393 "dhgroup": "ffdhe4096" 00:20:37.393 } 00:20:37.393 } 00:20:37.393 ]' 00:20:37.393 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.393 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.393 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.393 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:37.393 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.393 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.393 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.393 00:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.651 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:20:37.651 00:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:20:38.585 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.585 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.585 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.585 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.585 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.585 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.585 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.585 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:38.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:38.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:38.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.843 00:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.408 00:20:39.408 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.408 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.408 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.665 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.665 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.665 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.665 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.665 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.665 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.665 { 00:20:39.665 "cntlid": 75, 00:20:39.665 "qid": 0, 00:20:39.665 "state": "enabled", 00:20:39.665 "thread": "nvmf_tgt_poll_group_000", 00:20:39.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:39.665 
"listen_address": { 00:20:39.665 "trtype": "TCP", 00:20:39.665 "adrfam": "IPv4", 00:20:39.665 "traddr": "10.0.0.2", 00:20:39.665 "trsvcid": "4420" 00:20:39.665 }, 00:20:39.665 "peer_address": { 00:20:39.665 "trtype": "TCP", 00:20:39.665 "adrfam": "IPv4", 00:20:39.665 "traddr": "10.0.0.1", 00:20:39.665 "trsvcid": "45230" 00:20:39.665 }, 00:20:39.665 "auth": { 00:20:39.665 "state": "completed", 00:20:39.665 "digest": "sha384", 00:20:39.665 "dhgroup": "ffdhe4096" 00:20:39.665 } 00:20:39.665 } 00:20:39.665 ]' 00:20:39.665 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.665 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.665 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.665 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:39.665 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.922 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.922 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.922 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.180 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:20:40.180 00:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.114 00:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.684 00:20:41.684 00:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:41.684 00:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.684 00:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.944 00:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.944 00:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.944 00:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.944 00:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.944 00:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.944 00:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.944 { 00:20:41.944 "cntlid": 77, 00:20:41.944 "qid": 0, 00:20:41.944 "state": "enabled", 00:20:41.944 "thread": "nvmf_tgt_poll_group_000", 00:20:41.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:41.944 "listen_address": { 00:20:41.944 "trtype": "TCP", 00:20:41.944 "adrfam": "IPv4", 00:20:41.944 "traddr": "10.0.0.2", 00:20:41.944 "trsvcid": "4420" 00:20:41.944 }, 00:20:41.944 "peer_address": { 00:20:41.944 "trtype": "TCP", 00:20:41.944 "adrfam": "IPv4", 00:20:41.944 "traddr": "10.0.0.1", 00:20:41.944 "trsvcid": "60206" 00:20:41.944 }, 00:20:41.944 "auth": { 00:20:41.944 "state": "completed", 00:20:41.944 "digest": "sha384", 00:20:41.944 "dhgroup": "ffdhe4096" 00:20:41.944 } 00:20:41.944 } 00:20:41.944 ]' 00:20:41.944 00:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.944 00:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.944 00:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.944 00:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:41.944 00:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.944 00:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.944 00:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.944 00:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.202 00:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:20:42.202 00:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:20:43.149 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.149 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.149 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.149 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.149 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.149 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.150 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:43.150 00:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:43.408 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:43.408 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.408 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.408 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:43.408 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:43.408 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.408 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:43.408 00:26:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.408 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.408 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.408 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:43.408 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:43.408 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:43.973 00:20:43.973 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.973 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.973 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.230 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.230 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.230 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.230 00:26:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.230 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.230 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.230 { 00:20:44.230 "cntlid": 79, 00:20:44.230 "qid": 0, 00:20:44.230 "state": "enabled", 00:20:44.230 "thread": "nvmf_tgt_poll_group_000", 00:20:44.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:44.230 "listen_address": { 00:20:44.230 "trtype": "TCP", 00:20:44.230 "adrfam": "IPv4", 00:20:44.230 "traddr": "10.0.0.2", 00:20:44.230 "trsvcid": "4420" 00:20:44.230 }, 00:20:44.230 "peer_address": { 00:20:44.230 "trtype": "TCP", 00:20:44.230 "adrfam": "IPv4", 00:20:44.230 "traddr": "10.0.0.1", 00:20:44.230 "trsvcid": "60236" 00:20:44.230 }, 00:20:44.230 "auth": { 00:20:44.230 "state": "completed", 00:20:44.230 "digest": "sha384", 00:20:44.230 "dhgroup": "ffdhe4096" 00:20:44.230 } 00:20:44.230 } 00:20:44.230 ]' 00:20:44.230 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.230 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.230 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.230 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:44.230 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.230 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.230 00:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.230 00:26:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.488 00:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:20:44.488 00:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:20:45.423 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.423 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.423 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.423 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.423 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.423 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.423 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.423 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:20:45.423 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.680 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:45.680 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.680 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.680 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:45.680 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:45.680 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.680 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.680 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.680 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.680 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.680 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.680 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.680 00:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.246 00:20:46.504 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.504 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.504 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.762 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.762 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.762 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.762 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.762 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.762 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.762 { 00:20:46.762 "cntlid": 81, 00:20:46.762 "qid": 0, 00:20:46.762 "state": "enabled", 00:20:46.762 "thread": "nvmf_tgt_poll_group_000", 00:20:46.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:46.762 "listen_address": { 
00:20:46.762 "trtype": "TCP", 00:20:46.762 "adrfam": "IPv4", 00:20:46.762 "traddr": "10.0.0.2", 00:20:46.762 "trsvcid": "4420" 00:20:46.762 }, 00:20:46.762 "peer_address": { 00:20:46.762 "trtype": "TCP", 00:20:46.762 "adrfam": "IPv4", 00:20:46.762 "traddr": "10.0.0.1", 00:20:46.762 "trsvcid": "60260" 00:20:46.762 }, 00:20:46.762 "auth": { 00:20:46.762 "state": "completed", 00:20:46.762 "digest": "sha384", 00:20:46.762 "dhgroup": "ffdhe6144" 00:20:46.762 } 00:20:46.762 } 00:20:46.762 ]' 00:20:46.762 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.762 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.762 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.762 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:46.762 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.762 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.762 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.762 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.020 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:20:47.020 00:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:20:47.953 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.953 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.953 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.953 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.953 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.953 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.953 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.953 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.212 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:48.212 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:20:48.212 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.212 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:48.212 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:48.212 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.212 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.212 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.212 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.212 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.212 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.212 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.212 00:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.777 00:20:48.777 00:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.777 00:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.777 00:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.036 00:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.036 00:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.036 00:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.036 00:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.036 00:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.036 00:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.036 { 00:20:49.036 "cntlid": 83, 00:20:49.036 "qid": 0, 00:20:49.036 "state": "enabled", 00:20:49.036 "thread": "nvmf_tgt_poll_group_000", 00:20:49.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:49.036 "listen_address": { 00:20:49.036 "trtype": "TCP", 00:20:49.036 "adrfam": "IPv4", 00:20:49.036 "traddr": "10.0.0.2", 00:20:49.036 "trsvcid": "4420" 00:20:49.036 }, 00:20:49.036 "peer_address": { 00:20:49.036 "trtype": "TCP", 00:20:49.036 "adrfam": "IPv4", 00:20:49.036 "traddr": "10.0.0.1", 00:20:49.036 "trsvcid": "60278" 00:20:49.036 }, 00:20:49.036 "auth": { 00:20:49.036 "state": "completed", 00:20:49.036 "digest": "sha384", 00:20:49.036 "dhgroup": "ffdhe6144" 00:20:49.036 } 00:20:49.036 } 00:20:49.036 ]' 00:20:49.036 00:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:20:49.294 00:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.294 00:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.294 00:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:49.294 00:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.294 00:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.294 00:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.294 00:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.553 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:20:49.553 00:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:20:50.486 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.486 00:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.486 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.486 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.486 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.486 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.486 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:50.486 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:50.743 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:50.743 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.743 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:50.743 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:50.743 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:50.743 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.744 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.744 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.744 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.744 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.744 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.744 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.744 00:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.309 00:20:51.309 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.309 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.309 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.568 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.568 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.568 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.568 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.568 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.568 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.568 { 00:20:51.568 "cntlid": 85, 00:20:51.568 "qid": 0, 00:20:51.568 "state": "enabled", 00:20:51.568 "thread": "nvmf_tgt_poll_group_000", 00:20:51.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:51.568 "listen_address": { 00:20:51.568 "trtype": "TCP", 00:20:51.568 "adrfam": "IPv4", 00:20:51.568 "traddr": "10.0.0.2", 00:20:51.568 "trsvcid": "4420" 00:20:51.568 }, 00:20:51.568 "peer_address": { 00:20:51.568 "trtype": "TCP", 00:20:51.568 "adrfam": "IPv4", 00:20:51.568 "traddr": "10.0.0.1", 00:20:51.568 "trsvcid": "49960" 00:20:51.568 }, 00:20:51.568 "auth": { 00:20:51.568 "state": "completed", 00:20:51.568 "digest": "sha384", 00:20:51.568 "dhgroup": "ffdhe6144" 00:20:51.568 } 00:20:51.568 } 00:20:51.568 ]' 00:20:51.568 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.568 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.568 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.568 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:51.568 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.830 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:51.830 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.830 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.091 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:20:52.091 00:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.025 00:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.592 00:20:53.592 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.592 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.592 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.850 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.850 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.850 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.850 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.850 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.850 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.850 { 00:20:53.850 "cntlid": 87, 00:20:53.850 "qid": 0, 00:20:53.850 "state": "enabled", 00:20:53.850 "thread": "nvmf_tgt_poll_group_000", 00:20:53.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:53.850 "listen_address": { 00:20:53.850 "trtype": 
"TCP", 00:20:53.850 "adrfam": "IPv4", 00:20:53.850 "traddr": "10.0.0.2", 00:20:53.850 "trsvcid": "4420" 00:20:53.850 }, 00:20:53.850 "peer_address": { 00:20:53.850 "trtype": "TCP", 00:20:53.850 "adrfam": "IPv4", 00:20:53.850 "traddr": "10.0.0.1", 00:20:53.850 "trsvcid": "49988" 00:20:53.850 }, 00:20:53.850 "auth": { 00:20:53.850 "state": "completed", 00:20:53.850 "digest": "sha384", 00:20:53.850 "dhgroup": "ffdhe6144" 00:20:53.850 } 00:20:53.850 } 00:20:53.850 ]' 00:20:53.850 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.850 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.107 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.107 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:54.107 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.107 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.107 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.107 00:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.365 00:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:20:54.365 00:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:20:55.299 00:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.299 00:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.299 00:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.299 00:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.299 00:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.299 00:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.299 00:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.299 00:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:55.299 00:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:55.557 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:55.557 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.557 00:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.557 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:55.557 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:55.557 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.557 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.557 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.557 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.557 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.557 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.557 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.557 00:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.490 00:20:56.490 00:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.490 00:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.490 00:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.748 00:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.748 00:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.748 00:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.748 00:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.748 00:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.748 00:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.748 { 00:20:56.748 "cntlid": 89, 00:20:56.748 "qid": 0, 00:20:56.748 "state": "enabled", 00:20:56.748 "thread": "nvmf_tgt_poll_group_000", 00:20:56.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:56.748 "listen_address": { 00:20:56.748 "trtype": "TCP", 00:20:56.748 "adrfam": "IPv4", 00:20:56.748 "traddr": "10.0.0.2", 00:20:56.748 "trsvcid": "4420" 00:20:56.748 }, 00:20:56.748 "peer_address": { 00:20:56.748 "trtype": "TCP", 00:20:56.748 "adrfam": "IPv4", 00:20:56.748 "traddr": "10.0.0.1", 00:20:56.748 "trsvcid": "50024" 00:20:56.748 }, 00:20:56.748 "auth": { 00:20:56.748 "state": "completed", 00:20:56.748 "digest": "sha384", 00:20:56.748 "dhgroup": "ffdhe8192" 00:20:56.748 } 00:20:56.748 } 00:20:56.748 ]' 00:20:56.748 00:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.748 00:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.748 00:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.748 00:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:56.748 00:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.748 00:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.748 00:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.748 00:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.006 00:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:20:57.006 00:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:20:57.939 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
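The records above repeat one pattern per key: restrict the host to a digest/dhgroup pair with `bdev_nvme_set_options`, allow the host on the subsystem with `nvmf_subsystem_add_host --dhchap-key keyN`, attach a controller that authenticates with that key, verify, then detach and remove the host. A minimal sketch of that loop follows; it only prints the RPC command lines it would issue (no SPDK target is assumed to be running here), with paths, NQNs, and flags copied from the log, and the per-key controller keys (`ckeyN`) omitted for brevity:

```shell
# Sketch of the per-key DH-HMAC-CHAP loop driven by target/auth.sh in this log.
# Echo-only: no live target or host.sock is assumed to exist.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
SUBNQN="nqn.2024-03.io.spdk:cnode0"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"

for keyid in 0 1 2 3; do
    # 1. Pin the host to one digest and one DH group.
    echo "$RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192"
    # 2. Admit the host on the subsystem with this key.
    echo "$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key$keyid"
    # 3. Attach a controller that authenticates with the same key, then tear down.
    echo "$RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key$keyid"
    echo "$RPC bdev_nvme_detach_controller nvme0"
    echo "$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN"
done
```

In the log itself the same commands are issued for real through the `hostrpc`/`rpc_cmd` wrappers, and key 3 additionally exercises the case without a controller key.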
00:20:57.939 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.939 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.939 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.939 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.940 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.940 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.940 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:58.198 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:58.198 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.198 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:58.198 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:58.198 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:58.198 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.198 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.198 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.198 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.198 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.198 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.198 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.198 00:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.132 00:20:59.132 00:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.132 00:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.132 00:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.390 00:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.390 00:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.390 00:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.390 00:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.390 00:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.390 00:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.390 { 00:20:59.390 "cntlid": 91, 00:20:59.390 "qid": 0, 00:20:59.390 "state": "enabled", 00:20:59.390 "thread": "nvmf_tgt_poll_group_000", 00:20:59.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:59.390 "listen_address": { 00:20:59.390 "trtype": "TCP", 00:20:59.390 "adrfam": "IPv4", 00:20:59.390 "traddr": "10.0.0.2", 00:20:59.390 "trsvcid": "4420" 00:20:59.390 }, 00:20:59.390 "peer_address": { 00:20:59.390 "trtype": "TCP", 00:20:59.390 "adrfam": "IPv4", 00:20:59.390 "traddr": "10.0.0.1", 00:20:59.390 "trsvcid": "50046" 00:20:59.390 }, 00:20:59.390 "auth": { 00:20:59.390 "state": "completed", 00:20:59.390 "digest": "sha384", 00:20:59.390 "dhgroup": "ffdhe8192" 00:20:59.390 } 00:20:59.390 } 00:20:59.390 ]' 00:20:59.390 00:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.390 00:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.390 00:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.390 00:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:59.390 00:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.390 00:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:59.390 00:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.390 00:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.648 00:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:20:59.648 00:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:21:00.580 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.580 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.580 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.580 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.580 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.580 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:00.580 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:00.580 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:00.838 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:00.838 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.838 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:00.838 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:00.838 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:00.838 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.838 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.838 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.838 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.838 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.838 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.838 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.838 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.773 00:21:01.773 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.773 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.773 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.031 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.031 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.031 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.031 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.031 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.031 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.031 { 00:21:02.031 "cntlid": 93, 00:21:02.031 "qid": 0, 00:21:02.031 "state": "enabled", 00:21:02.031 "thread": "nvmf_tgt_poll_group_000", 00:21:02.031 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:02.031 "listen_address": { 00:21:02.031 "trtype": "TCP", 00:21:02.031 "adrfam": "IPv4", 00:21:02.031 "traddr": "10.0.0.2", 00:21:02.031 "trsvcid": "4420" 00:21:02.031 }, 00:21:02.031 "peer_address": { 00:21:02.031 "trtype": "TCP", 00:21:02.031 "adrfam": "IPv4", 00:21:02.031 "traddr": "10.0.0.1", 00:21:02.031 "trsvcid": "54928" 00:21:02.031 }, 00:21:02.031 "auth": { 00:21:02.031 "state": "completed", 00:21:02.031 "digest": "sha384", 00:21:02.031 "dhgroup": "ffdhe8192" 00:21:02.031 } 00:21:02.031 } 00:21:02.031 ]' 00:21:02.031 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.031 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.031 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.031 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:02.031 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.031 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.031 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.031 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.596 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:21:02.596 00:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.528 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.461 00:21:04.461 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:04.461 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.461 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.719 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.719 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.719 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.719 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.719 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.719 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.719 { 00:21:04.719 "cntlid": 95, 00:21:04.719 "qid": 0, 00:21:04.719 "state": "enabled", 00:21:04.719 "thread": "nvmf_tgt_poll_group_000", 00:21:04.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:04.719 "listen_address": { 00:21:04.719 "trtype": "TCP", 00:21:04.719 "adrfam": "IPv4", 00:21:04.719 "traddr": "10.0.0.2", 00:21:04.719 "trsvcid": "4420" 00:21:04.719 }, 00:21:04.719 "peer_address": { 00:21:04.719 "trtype": "TCP", 00:21:04.719 "adrfam": "IPv4", 00:21:04.719 "traddr": "10.0.0.1", 00:21:04.719 "trsvcid": "54972" 00:21:04.719 }, 00:21:04.719 "auth": { 00:21:04.719 "state": "completed", 00:21:04.719 "digest": "sha384", 00:21:04.719 "dhgroup": "ffdhe8192" 00:21:04.719 } 00:21:04.719 } 00:21:04.719 ]' 00:21:04.719 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.719 00:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.719 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.719 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:04.719 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.977 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.977 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.977 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.236 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:21:05.236 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:21:06.169 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.169 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.169 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.169 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.169 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.169 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:06.169 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.169 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.169 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.169 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.427 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:06.427 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.427 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.427 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:06.427 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:06.427 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.427 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.427 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.427 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.427 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.427 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.427 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.427 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.685 00:21:06.685 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.685 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.685 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.943 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.943 00:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.943 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.943 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.943 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.943 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.943 { 00:21:06.943 "cntlid": 97, 00:21:06.943 "qid": 0, 00:21:06.943 "state": "enabled", 00:21:06.943 "thread": "nvmf_tgt_poll_group_000", 00:21:06.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:06.943 "listen_address": { 00:21:06.944 "trtype": "TCP", 00:21:06.944 "adrfam": "IPv4", 00:21:06.944 "traddr": "10.0.0.2", 00:21:06.944 "trsvcid": "4420" 00:21:06.944 }, 00:21:06.944 "peer_address": { 00:21:06.944 "trtype": "TCP", 00:21:06.944 "adrfam": "IPv4", 00:21:06.944 "traddr": "10.0.0.1", 00:21:06.944 "trsvcid": "55002" 00:21:06.944 }, 00:21:06.944 "auth": { 00:21:06.944 "state": "completed", 00:21:06.944 "digest": "sha512", 00:21:06.944 "dhgroup": "null" 00:21:06.944 } 00:21:06.944 } 00:21:06.944 ]' 00:21:06.944 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.944 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.944 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.944 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:06.944 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.944 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.944 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.944 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.510 00:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:21:07.510 00:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:21:08.443 00:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.443 00:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.443 00:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.443 00:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.443 00:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.443 00:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.443 00:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.443 00:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.443 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:08.443 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.443 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.443 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:08.443 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:08.443 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.443 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.443 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.443 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.443 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.443 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.443 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.443 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.008 00:21:09.008 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.008 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.008 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.267 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.267 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.267 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.267 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.267 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.267 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.267 { 00:21:09.267 "cntlid": 99, 
00:21:09.267 "qid": 0, 00:21:09.267 "state": "enabled", 00:21:09.267 "thread": "nvmf_tgt_poll_group_000", 00:21:09.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:09.267 "listen_address": { 00:21:09.267 "trtype": "TCP", 00:21:09.267 "adrfam": "IPv4", 00:21:09.267 "traddr": "10.0.0.2", 00:21:09.267 "trsvcid": "4420" 00:21:09.267 }, 00:21:09.267 "peer_address": { 00:21:09.267 "trtype": "TCP", 00:21:09.267 "adrfam": "IPv4", 00:21:09.267 "traddr": "10.0.0.1", 00:21:09.267 "trsvcid": "55030" 00:21:09.267 }, 00:21:09.267 "auth": { 00:21:09.267 "state": "completed", 00:21:09.267 "digest": "sha512", 00:21:09.267 "dhgroup": "null" 00:21:09.267 } 00:21:09.267 } 00:21:09.267 ]' 00:21:09.267 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.267 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.267 00:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.267 00:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:09.267 00:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.267 00:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.267 00:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.267 00:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.832 00:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret 
DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:21:09.833 00:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:21:10.398 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.656 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.656 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.656 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.656 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.656 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.656 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:10.656 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:10.914 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:21:10.914 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.914 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:10.914 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:10.914 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:10.914 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.914 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.914 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.914 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.914 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.914 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.914 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.914 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.172 00:21:11.172 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.172 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.172 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.431 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.431 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.431 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.431 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.431 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.431 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.431 { 00:21:11.431 "cntlid": 101, 00:21:11.431 "qid": 0, 00:21:11.431 "state": "enabled", 00:21:11.431 "thread": "nvmf_tgt_poll_group_000", 00:21:11.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:11.431 "listen_address": { 00:21:11.431 "trtype": "TCP", 00:21:11.431 "adrfam": "IPv4", 00:21:11.431 "traddr": "10.0.0.2", 00:21:11.431 "trsvcid": "4420" 00:21:11.431 }, 00:21:11.431 "peer_address": { 00:21:11.431 "trtype": "TCP", 00:21:11.431 "adrfam": "IPv4", 00:21:11.431 "traddr": "10.0.0.1", 00:21:11.431 "trsvcid": "54164" 00:21:11.431 }, 00:21:11.431 "auth": { 00:21:11.431 "state": "completed", 00:21:11.431 "digest": "sha512", 00:21:11.431 "dhgroup": "null" 00:21:11.431 } 00:21:11.431 } 
00:21:11.431 ]' 00:21:11.431 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.431 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.431 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.431 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:11.431 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.431 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.431 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.431 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.998 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:21:11.998 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.932 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:12.932 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.497 00:21:13.497 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.497 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.497 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.755 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.755 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:13.755 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.755 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.755 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.755 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.755 { 00:21:13.755 "cntlid": 103, 00:21:13.755 "qid": 0, 00:21:13.755 "state": "enabled", 00:21:13.755 "thread": "nvmf_tgt_poll_group_000", 00:21:13.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:13.755 "listen_address": { 00:21:13.755 "trtype": "TCP", 00:21:13.755 "adrfam": "IPv4", 00:21:13.755 "traddr": "10.0.0.2", 00:21:13.755 "trsvcid": "4420" 00:21:13.755 }, 00:21:13.755 "peer_address": { 00:21:13.755 "trtype": "TCP", 00:21:13.755 "adrfam": "IPv4", 00:21:13.755 "traddr": "10.0.0.1", 00:21:13.755 "trsvcid": "54192" 00:21:13.755 }, 00:21:13.755 "auth": { 00:21:13.755 "state": "completed", 00:21:13.755 "digest": "sha512", 00:21:13.755 "dhgroup": "null" 00:21:13.755 } 00:21:13.755 } 00:21:13.755 ]' 00:21:13.755 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.755 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.755 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.755 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:13.755 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.755 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.755 00:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.755 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.013 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:21:14.013 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:21:14.947 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.947 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.947 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.947 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.947 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.947 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.947 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.947 00:26:38 
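The `nvme connect --dhchap-secret` invocations above pass secrets in the `DHHC-1:<id>:<base64>:` wire format. As an illustrative sketch (not part of the test suite): to the best of my understanding the base64 payload is the raw key followed by a CRC-32 of the key in little-endian byte order, matching what `nvme gen-dhchap-key` emits; the helper names below are hypothetical and the trailer layout is an assumption.

```python
import base64
import secrets
import zlib

def gen_dhchap_secret(hmac_id: int, key_len: int) -> str:
    # Hypothetical generator: base64 of key bytes plus a CRC-32 trailer
    # (little-endian), framed as "DHHC-1:<id>:<base64>:". The trailer
    # layout is an assumption based on nvme-cli's gen-dhchap-key output.
    key = secrets.token_bytes(key_len)
    crc = zlib.crc32(key).to_bytes(4, "little")
    return f"DHHC-1:{hmac_id:02d}:{base64.b64encode(key + crc).decode()}:"

def check_dhchap_secret(secret: str) -> bool:
    # Validate the framing and recompute the CRC-32 trailer.
    prefix, hmac_id, b64, tail = secret.split(":")
    if prefix != "DHHC-1" or tail != "":
        return False
    blob = base64.b64decode(b64)
    key, crc = blob[:-4], blob[-4:]
    return zlib.crc32(key).to_bytes(4, "little") == crc
```

The `<id>` field selects the hash used to transform the secret (e.g. `03` for a 64-byte SHA-512 key, as in the `DHHC-1:03:` secrets logged here).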
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.947 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.206 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:15.206 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.206 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:15.206 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:15.206 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:15.206 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.206 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.206 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.206 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.206 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.206 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.206 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.206 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.770 00:21:15.770 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.770 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.770 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.029 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.029 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.029 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.029 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.029 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.029 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.029 { 00:21:16.029 "cntlid": 105, 00:21:16.029 "qid": 0, 00:21:16.029 "state": "enabled", 00:21:16.029 "thread": "nvmf_tgt_poll_group_000", 00:21:16.029 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:16.029 "listen_address": { 00:21:16.029 "trtype": "TCP", 00:21:16.029 "adrfam": "IPv4", 00:21:16.029 "traddr": "10.0.0.2", 00:21:16.029 "trsvcid": "4420" 00:21:16.029 }, 00:21:16.029 "peer_address": { 00:21:16.029 "trtype": "TCP", 00:21:16.029 "adrfam": "IPv4", 00:21:16.029 "traddr": "10.0.0.1", 00:21:16.029 "trsvcid": "54210" 00:21:16.029 }, 00:21:16.029 "auth": { 00:21:16.029 "state": "completed", 00:21:16.029 "digest": "sha512", 00:21:16.029 "dhgroup": "ffdhe2048" 00:21:16.029 } 00:21:16.029 } 00:21:16.029 ]' 00:21:16.029 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.029 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.029 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.029 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:16.029 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.029 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.029 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.029 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.287 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret 
DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:21:16.287 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:21:17.220 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.220 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.220 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.220 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.220 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.220 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.220 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.220 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.478 00:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:17.478 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.478 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.478 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:17.478 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:17.478 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.478 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.478 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.478 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.478 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.478 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.478 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.478 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.044 00:21:18.044 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.044 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.044 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.302 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.302 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.302 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.302 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.302 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.303 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.303 { 00:21:18.303 "cntlid": 107, 00:21:18.303 "qid": 0, 00:21:18.303 "state": "enabled", 00:21:18.303 "thread": "nvmf_tgt_poll_group_000", 00:21:18.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:18.303 "listen_address": { 00:21:18.303 "trtype": "TCP", 00:21:18.303 "adrfam": "IPv4", 00:21:18.303 "traddr": "10.0.0.2", 00:21:18.303 "trsvcid": "4420" 00:21:18.303 }, 00:21:18.303 "peer_address": { 00:21:18.303 "trtype": "TCP", 00:21:18.303 "adrfam": "IPv4", 00:21:18.303 "traddr": "10.0.0.1", 00:21:18.303 "trsvcid": "54236" 00:21:18.303 }, 00:21:18.303 "auth": { 00:21:18.303 "state": 
"completed", 00:21:18.303 "digest": "sha512", 00:21:18.303 "dhgroup": "ffdhe2048" 00:21:18.303 } 00:21:18.303 } 00:21:18.303 ]' 00:21:18.303 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.303 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.303 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.303 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:18.303 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.303 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.303 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.303 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.560 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:21:18.560 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:21:19.492 00:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.492 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.492 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.492 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.492 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.492 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.492 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:19.493 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:19.750 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:19.750 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.750 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:19.750 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:19.750 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:19.750 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.750 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.750 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.750 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.750 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.750 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.750 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.750 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.009 00:21:20.273 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.273 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.273 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.533 
00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.533 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.533 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.533 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.533 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.533 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.533 { 00:21:20.534 "cntlid": 109, 00:21:20.534 "qid": 0, 00:21:20.534 "state": "enabled", 00:21:20.534 "thread": "nvmf_tgt_poll_group_000", 00:21:20.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:20.534 "listen_address": { 00:21:20.534 "trtype": "TCP", 00:21:20.534 "adrfam": "IPv4", 00:21:20.534 "traddr": "10.0.0.2", 00:21:20.534 "trsvcid": "4420" 00:21:20.534 }, 00:21:20.534 "peer_address": { 00:21:20.534 "trtype": "TCP", 00:21:20.534 "adrfam": "IPv4", 00:21:20.534 "traddr": "10.0.0.1", 00:21:20.534 "trsvcid": "54252" 00:21:20.534 }, 00:21:20.534 "auth": { 00:21:20.534 "state": "completed", 00:21:20.534 "digest": "sha512", 00:21:20.534 "dhgroup": "ffdhe2048" 00:21:20.534 } 00:21:20.534 } 00:21:20.534 ]' 00:21:20.534 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.534 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.534 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.534 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:20.534 00:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.534 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.534 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.534 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.792 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:21:20.792 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:21:21.724 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.724 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.724 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.724 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.724 
00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.724 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.724 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:21.724 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:21.982 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:21.982 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.982 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.982 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:21.982 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:21.982 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.982 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:21.982 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.982 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.982 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.982 00:26:45 
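The `for dhgroup in "${dhgroups[@]}"` / `for keyid in "${!keys[@]}"` markers show this excerpt iterating a test matrix: for each DH group, each key ID is configured via `bdev_nvme_set_options` and `nvmf_subsystem_add_host`, then attached, verified, and torn down. A hypothetical reconstruction of the matrix visible in this excerpt (groups and key IDs taken from the log; the full suite may cover more digests and groups than appear here):

```python
from itertools import product

# Combinations exercised in this excerpt of the log: one sha512 pass
# over each DH group, cycling DH-HMAC-CHAP keys 0..3.
digests = ["sha512"]
dhgroups = ["null", "ffdhe2048", "ffdhe3072"]  # groups seen above
keyids = [0, 1, 2, 3]

matrix = [(d, g, k) for d, g, k in product(digests, dhgroups, keyids)]
```

Each tuple maps to one logged cycle: `bdev_nvme_set_options --dhchap-digests <d> --dhchap-dhgroups <g>`, then `nvmf_subsystem_add_host ... --dhchap-key key<k>`, attach, qpair check, detach, and `nvmf_subsystem_remove_host`.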
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:21.982 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.982 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:22.548 00:21:22.548 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.548 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.548 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.806 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.806 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.806 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.806 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.806 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.806 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.806 { 00:21:22.806 "cntlid": 111, 
00:21:22.806 "qid": 0, 00:21:22.806 "state": "enabled", 00:21:22.806 "thread": "nvmf_tgt_poll_group_000", 00:21:22.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:22.806 "listen_address": { 00:21:22.806 "trtype": "TCP", 00:21:22.806 "adrfam": "IPv4", 00:21:22.806 "traddr": "10.0.0.2", 00:21:22.806 "trsvcid": "4420" 00:21:22.806 }, 00:21:22.806 "peer_address": { 00:21:22.806 "trtype": "TCP", 00:21:22.806 "adrfam": "IPv4", 00:21:22.806 "traddr": "10.0.0.1", 00:21:22.806 "trsvcid": "59436" 00:21:22.806 }, 00:21:22.806 "auth": { 00:21:22.806 "state": "completed", 00:21:22.806 "digest": "sha512", 00:21:22.806 "dhgroup": "ffdhe2048" 00:21:22.806 } 00:21:22.806 } 00:21:22.806 ]' 00:21:22.806 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.806 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.806 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.806 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:22.806 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.806 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.806 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.806 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.064 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:21:23.064 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:21:23.996 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.996 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.996 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.996 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.996 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.996 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.996 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.996 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.996 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.255 00:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:24.255 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.255 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.255 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:24.255 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:24.255 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.255 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.255 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.255 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.255 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.255 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.255 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.255 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.513 00:21:24.513 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.513 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.513 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.079 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.079 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.079 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.079 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.079 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.079 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.079 { 00:21:25.079 "cntlid": 113, 00:21:25.079 "qid": 0, 00:21:25.079 "state": "enabled", 00:21:25.079 "thread": "nvmf_tgt_poll_group_000", 00:21:25.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:25.079 "listen_address": { 00:21:25.079 "trtype": "TCP", 00:21:25.079 "adrfam": "IPv4", 00:21:25.079 "traddr": "10.0.0.2", 00:21:25.079 "trsvcid": "4420" 00:21:25.079 }, 00:21:25.079 "peer_address": { 00:21:25.079 "trtype": "TCP", 00:21:25.079 "adrfam": "IPv4", 00:21:25.079 "traddr": "10.0.0.1", 00:21:25.079 "trsvcid": "59458" 00:21:25.079 }, 00:21:25.079 "auth": { 00:21:25.079 "state": 
"completed", 00:21:25.079 "digest": "sha512", 00:21:25.079 "dhgroup": "ffdhe3072" 00:21:25.079 } 00:21:25.079 } 00:21:25.079 ]' 00:21:25.079 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.079 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.079 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.079 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:25.079 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.079 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.079 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.079 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.337 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:21:25.337 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret 
DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:21:26.269 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.269 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.269 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.269 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.269 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.269 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.269 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.269 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.528 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:26.528 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.528 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.528 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:26.528 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:26.528 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.528 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.528 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.528 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.528 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.528 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.528 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.528 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.797 00:21:26.797 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.797 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.797 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.364 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.364 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.364 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.364 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.364 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.364 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.364 { 00:21:27.364 "cntlid": 115, 00:21:27.364 "qid": 0, 00:21:27.364 "state": "enabled", 00:21:27.364 "thread": "nvmf_tgt_poll_group_000", 00:21:27.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:27.364 "listen_address": { 00:21:27.364 "trtype": "TCP", 00:21:27.364 "adrfam": "IPv4", 00:21:27.364 "traddr": "10.0.0.2", 00:21:27.364 "trsvcid": "4420" 00:21:27.364 }, 00:21:27.364 "peer_address": { 00:21:27.364 "trtype": "TCP", 00:21:27.364 "adrfam": "IPv4", 00:21:27.364 "traddr": "10.0.0.1", 00:21:27.364 "trsvcid": "59480" 00:21:27.364 }, 00:21:27.364 "auth": { 00:21:27.364 "state": "completed", 00:21:27.364 "digest": "sha512", 00:21:27.364 "dhgroup": "ffdhe3072" 00:21:27.364 } 00:21:27.364 } 00:21:27.364 ]' 00:21:27.364 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.364 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.364 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.364 00:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:27.364 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.364 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.364 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.364 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.623 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:21:27.623 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:21:28.557 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.557 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.557 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:28.557 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.557 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.557 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.557 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.557 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.817 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:28.817 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.818 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.818 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:28.818 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:28.818 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.818 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.818 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.818 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:28.818 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.818 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.818 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.818 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.384 00:21:29.384 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.384 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.384 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.641 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.641 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.641 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.641 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.641 00:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.641 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.641 { 00:21:29.641 "cntlid": 117, 00:21:29.641 "qid": 0, 00:21:29.641 "state": "enabled", 00:21:29.641 "thread": "nvmf_tgt_poll_group_000", 00:21:29.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:29.641 "listen_address": { 00:21:29.641 "trtype": "TCP", 00:21:29.641 "adrfam": "IPv4", 00:21:29.641 "traddr": "10.0.0.2", 00:21:29.641 "trsvcid": "4420" 00:21:29.641 }, 00:21:29.641 "peer_address": { 00:21:29.641 "trtype": "TCP", 00:21:29.641 "adrfam": "IPv4", 00:21:29.641 "traddr": "10.0.0.1", 00:21:29.641 "trsvcid": "59520" 00:21:29.641 }, 00:21:29.641 "auth": { 00:21:29.641 "state": "completed", 00:21:29.641 "digest": "sha512", 00:21:29.641 "dhgroup": "ffdhe3072" 00:21:29.641 } 00:21:29.641 } 00:21:29.641 ]' 00:21:29.641 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.641 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.641 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.641 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:29.641 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.641 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.641 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.641 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.898 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:21:29.898 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:21:30.831 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.831 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.831 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.831 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.831 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.831 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.831 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:30.831 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:31.089 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:31.089 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.089 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.090 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:31.090 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:31.090 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.090 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:31.090 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.090 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.090 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.090 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:31.090 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.090 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.348 00:21:31.606 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.606 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.606 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.865 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.865 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.865 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.865 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.865 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.865 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.865 { 00:21:31.865 "cntlid": 119, 00:21:31.865 "qid": 0, 00:21:31.865 "state": "enabled", 00:21:31.865 "thread": "nvmf_tgt_poll_group_000", 00:21:31.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:31.865 "listen_address": { 00:21:31.865 "trtype": "TCP", 00:21:31.865 "adrfam": "IPv4", 00:21:31.865 "traddr": "10.0.0.2", 00:21:31.865 "trsvcid": "4420" 00:21:31.865 }, 00:21:31.865 "peer_address": { 00:21:31.865 "trtype": "TCP", 00:21:31.865 "adrfam": "IPv4", 00:21:31.865 "traddr": "10.0.0.1", 
00:21:31.865 "trsvcid": "52474" 00:21:31.865 }, 00:21:31.865 "auth": { 00:21:31.865 "state": "completed", 00:21:31.865 "digest": "sha512", 00:21:31.865 "dhgroup": "ffdhe3072" 00:21:31.865 } 00:21:31.865 } 00:21:31.865 ]' 00:21:31.865 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.865 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.865 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.865 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:31.865 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.865 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.865 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.865 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.122 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:21:32.122 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:21:33.055 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.055 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.055 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.055 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.055 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.055 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:33.055 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.055 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.055 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.313 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:33.313 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.313 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.313 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:33.313 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:33.313 00:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.313 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.313 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.313 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.313 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.313 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.313 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.313 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.879 00:21:33.879 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.879 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.879 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.138 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.138 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.138 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.138 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.138 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.138 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.138 { 00:21:34.138 "cntlid": 121, 00:21:34.138 "qid": 0, 00:21:34.138 "state": "enabled", 00:21:34.138 "thread": "nvmf_tgt_poll_group_000", 00:21:34.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:34.138 "listen_address": { 00:21:34.138 "trtype": "TCP", 00:21:34.138 "adrfam": "IPv4", 00:21:34.138 "traddr": "10.0.0.2", 00:21:34.138 "trsvcid": "4420" 00:21:34.138 }, 00:21:34.138 "peer_address": { 00:21:34.138 "trtype": "TCP", 00:21:34.138 "adrfam": "IPv4", 00:21:34.138 "traddr": "10.0.0.1", 00:21:34.138 "trsvcid": "52492" 00:21:34.138 }, 00:21:34.138 "auth": { 00:21:34.138 "state": "completed", 00:21:34.138 "digest": "sha512", 00:21:34.138 "dhgroup": "ffdhe4096" 00:21:34.138 } 00:21:34.138 } 00:21:34.138 ]' 00:21:34.138 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.138 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.138 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.138 00:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:34.138 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.138 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.138 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.138 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.397 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:21:34.397 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:21:35.331 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.331 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.331 00:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.331 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.331 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.331 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.332 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.332 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.594 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:35.594 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.594 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.594 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:35.594 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:35.594 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.594 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.594 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.595 00:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.595 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.595 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.595 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.595 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.853 00:21:35.853 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.853 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.853 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.420 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.420 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.420 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.420 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:36.420 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.420 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.420 { 00:21:36.420 "cntlid": 123, 00:21:36.420 "qid": 0, 00:21:36.420 "state": "enabled", 00:21:36.420 "thread": "nvmf_tgt_poll_group_000", 00:21:36.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:36.420 "listen_address": { 00:21:36.420 "trtype": "TCP", 00:21:36.420 "adrfam": "IPv4", 00:21:36.420 "traddr": "10.0.0.2", 00:21:36.420 "trsvcid": "4420" 00:21:36.420 }, 00:21:36.420 "peer_address": { 00:21:36.420 "trtype": "TCP", 00:21:36.420 "adrfam": "IPv4", 00:21:36.420 "traddr": "10.0.0.1", 00:21:36.420 "trsvcid": "52514" 00:21:36.420 }, 00:21:36.420 "auth": { 00:21:36.420 "state": "completed", 00:21:36.420 "digest": "sha512", 00:21:36.420 "dhgroup": "ffdhe4096" 00:21:36.420 } 00:21:36.420 } 00:21:36.420 ]' 00:21:36.420 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.420 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.420 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.420 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:36.420 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.420 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.420 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.420 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.684 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:21:36.684 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:21:37.624 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.624 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.624 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.624 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.624 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.624 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.624 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.624 00:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.882 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:37.882 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.882 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.882 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:37.882 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:37.882 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.882 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.882 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.882 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.882 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.882 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.882 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.882 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.141 00:21:38.141 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.141 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.141 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.707 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.707 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.707 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.707 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.707 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.707 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.707 { 00:21:38.707 "cntlid": 125, 00:21:38.707 "qid": 0, 00:21:38.707 "state": "enabled", 00:21:38.707 "thread": "nvmf_tgt_poll_group_000", 00:21:38.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:38.707 "listen_address": { 00:21:38.707 "trtype": "TCP", 00:21:38.707 "adrfam": "IPv4", 00:21:38.707 "traddr": "10.0.0.2", 00:21:38.707 
"trsvcid": "4420" 00:21:38.707 }, 00:21:38.707 "peer_address": { 00:21:38.707 "trtype": "TCP", 00:21:38.707 "adrfam": "IPv4", 00:21:38.707 "traddr": "10.0.0.1", 00:21:38.707 "trsvcid": "52532" 00:21:38.707 }, 00:21:38.707 "auth": { 00:21:38.707 "state": "completed", 00:21:38.707 "digest": "sha512", 00:21:38.707 "dhgroup": "ffdhe4096" 00:21:38.707 } 00:21:38.707 } 00:21:38.707 ]' 00:21:38.707 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.707 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.707 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.707 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:38.707 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.707 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.707 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.707 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.965 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:21:38.965 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:21:39.899 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.899 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.899 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.899 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.899 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.899 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.899 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:39.899 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:40.157 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:40.157 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.157 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.157 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:40.157 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:40.157 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.157 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:40.157 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.157 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.157 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.157 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:40.157 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.157 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.416 00:21:40.416 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.416 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.416 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.674 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.674 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.674 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.674 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.932 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.932 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.932 { 00:21:40.932 "cntlid": 127, 00:21:40.932 "qid": 0, 00:21:40.932 "state": "enabled", 00:21:40.932 "thread": "nvmf_tgt_poll_group_000", 00:21:40.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:40.932 "listen_address": { 00:21:40.932 "trtype": "TCP", 00:21:40.932 "adrfam": "IPv4", 00:21:40.932 "traddr": "10.0.0.2", 00:21:40.932 "trsvcid": "4420" 00:21:40.932 }, 00:21:40.932 "peer_address": { 00:21:40.932 "trtype": "TCP", 00:21:40.932 "adrfam": "IPv4", 00:21:40.932 "traddr": "10.0.0.1", 00:21:40.932 "trsvcid": "52564" 00:21:40.932 }, 00:21:40.932 "auth": { 00:21:40.932 "state": "completed", 00:21:40.932 "digest": "sha512", 00:21:40.932 "dhgroup": "ffdhe4096" 00:21:40.932 } 00:21:40.932 } 00:21:40.932 ]' 00:21:40.932 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.932 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.932 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.932 00:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:40.932 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.932 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.932 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.932 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.190 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:21:41.190 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:21:42.131 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.131 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.131 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.131 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:42.131 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.131 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:42.131 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.131 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.131 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.395 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:42.395 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.395 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.395 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:42.395 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:42.395 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.395 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.395 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.395 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:42.395 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.395 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.395 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.395 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.960 00:21:42.960 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.960 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.960 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.218 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.218 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.218 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.218 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.218 00:27:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.218 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.218 { 00:21:43.218 "cntlid": 129, 00:21:43.218 "qid": 0, 00:21:43.218 "state": "enabled", 00:21:43.218 "thread": "nvmf_tgt_poll_group_000", 00:21:43.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:43.218 "listen_address": { 00:21:43.218 "trtype": "TCP", 00:21:43.218 "adrfam": "IPv4", 00:21:43.218 "traddr": "10.0.0.2", 00:21:43.218 "trsvcid": "4420" 00:21:43.218 }, 00:21:43.218 "peer_address": { 00:21:43.218 "trtype": "TCP", 00:21:43.218 "adrfam": "IPv4", 00:21:43.218 "traddr": "10.0.0.1", 00:21:43.218 "trsvcid": "59248" 00:21:43.218 }, 00:21:43.218 "auth": { 00:21:43.218 "state": "completed", 00:21:43.218 "digest": "sha512", 00:21:43.218 "dhgroup": "ffdhe6144" 00:21:43.218 } 00:21:43.218 } 00:21:43.218 ]' 00:21:43.218 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.218 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.218 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.218 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:43.218 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.218 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.218 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.218 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.476 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:21:43.476 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:21:44.407 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.407 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.407 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.407 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.407 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.407 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.407 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:44.407 00:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:44.665 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:44.665 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.665 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.665 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:44.665 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:44.665 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.665 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.665 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.665 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.665 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.665 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.665 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.665 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.231 00:21:45.231 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.231 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.231 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.488 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.488 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.488 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.488 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.488 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.488 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.488 { 00:21:45.488 "cntlid": 131, 00:21:45.488 "qid": 0, 00:21:45.488 "state": "enabled", 00:21:45.488 "thread": "nvmf_tgt_poll_group_000", 00:21:45.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:45.488 "listen_address": { 00:21:45.488 "trtype": "TCP", 00:21:45.488 "adrfam": "IPv4", 00:21:45.488 "traddr": "10.0.0.2", 00:21:45.488 
"trsvcid": "4420" 00:21:45.488 }, 00:21:45.488 "peer_address": { 00:21:45.488 "trtype": "TCP", 00:21:45.488 "adrfam": "IPv4", 00:21:45.488 "traddr": "10.0.0.1", 00:21:45.488 "trsvcid": "59284" 00:21:45.488 }, 00:21:45.488 "auth": { 00:21:45.488 "state": "completed", 00:21:45.488 "digest": "sha512", 00:21:45.488 "dhgroup": "ffdhe6144" 00:21:45.488 } 00:21:45.488 } 00:21:45.488 ]' 00:21:45.488 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.746 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.746 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.746 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:45.746 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.746 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.746 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.746 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.004 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:21:46.004 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:21:46.938 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.938 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.938 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.938 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.938 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.938 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.938 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.938 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:47.196 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:47.196 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.196 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.196 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:47.196 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:47.196 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.196 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.196 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.196 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.196 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.196 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.196 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.196 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.763 00:21:47.763 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.763 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.763 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.021 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.021 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.021 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.021 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.021 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.021 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.021 { 00:21:48.021 "cntlid": 133, 00:21:48.021 "qid": 0, 00:21:48.021 "state": "enabled", 00:21:48.021 "thread": "nvmf_tgt_poll_group_000", 00:21:48.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:48.021 "listen_address": { 00:21:48.021 "trtype": "TCP", 00:21:48.021 "adrfam": "IPv4", 00:21:48.021 "traddr": "10.0.0.2", 00:21:48.021 "trsvcid": "4420" 00:21:48.021 }, 00:21:48.021 "peer_address": { 00:21:48.021 "trtype": "TCP", 00:21:48.021 "adrfam": "IPv4", 00:21:48.021 "traddr": "10.0.0.1", 00:21:48.021 "trsvcid": "59312" 00:21:48.021 }, 00:21:48.021 "auth": { 00:21:48.021 "state": "completed", 00:21:48.021 "digest": "sha512", 00:21:48.021 "dhgroup": "ffdhe6144" 00:21:48.021 } 00:21:48.021 } 00:21:48.021 ]' 00:21:48.021 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.021 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.021 00:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.021 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:48.021 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.021 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.021 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.021 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.587 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:21:48.587 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:21:49.153 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.411 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.411 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.411 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.411 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.411 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.412 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.412 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.670 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:49.670 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.670 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.670 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:49.670 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:49.670 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.670 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:49.670 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.670 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.670 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.670 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:49.670 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.670 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.236 00:21:50.236 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.236 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.236 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.494 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.494 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.494 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.494 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:50.494 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.494 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.494 { 00:21:50.494 "cntlid": 135, 00:21:50.494 "qid": 0, 00:21:50.494 "state": "enabled", 00:21:50.494 "thread": "nvmf_tgt_poll_group_000", 00:21:50.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:50.494 "listen_address": { 00:21:50.494 "trtype": "TCP", 00:21:50.494 "adrfam": "IPv4", 00:21:50.494 "traddr": "10.0.0.2", 00:21:50.494 "trsvcid": "4420" 00:21:50.494 }, 00:21:50.494 "peer_address": { 00:21:50.494 "trtype": "TCP", 00:21:50.494 "adrfam": "IPv4", 00:21:50.494 "traddr": "10.0.0.1", 00:21:50.494 "trsvcid": "59336" 00:21:50.494 }, 00:21:50.494 "auth": { 00:21:50.494 "state": "completed", 00:21:50.494 "digest": "sha512", 00:21:50.494 "dhgroup": "ffdhe6144" 00:21:50.494 } 00:21:50.494 } 00:21:50.494 ]' 00:21:50.494 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.494 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.494 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.494 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:50.494 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.494 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.494 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.494 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.752 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:21:50.752 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:21:51.684 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.684 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.684 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.684 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.684 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.684 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:51.684 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.684 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.685 00:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.948 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:51.948 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.948 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:51.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:51.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.949 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.885 00:21:52.885 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.885 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.885 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.143 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.143 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.143 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.143 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.143 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.143 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.143 { 00:21:53.143 "cntlid": 137, 00:21:53.143 "qid": 0, 00:21:53.143 "state": "enabled", 00:21:53.143 "thread": "nvmf_tgt_poll_group_000", 00:21:53.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:53.143 "listen_address": { 00:21:53.143 "trtype": "TCP", 00:21:53.143 "adrfam": "IPv4", 00:21:53.143 "traddr": "10.0.0.2", 00:21:53.143 
"trsvcid": "4420" 00:21:53.143 }, 00:21:53.143 "peer_address": { 00:21:53.143 "trtype": "TCP", 00:21:53.143 "adrfam": "IPv4", 00:21:53.143 "traddr": "10.0.0.1", 00:21:53.143 "trsvcid": "55388" 00:21:53.143 }, 00:21:53.143 "auth": { 00:21:53.143 "state": "completed", 00:21:53.143 "digest": "sha512", 00:21:53.143 "dhgroup": "ffdhe8192" 00:21:53.143 } 00:21:53.143 } 00:21:53.143 ]' 00:21:53.143 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.143 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.143 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.143 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:53.143 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.401 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.401 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.401 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.659 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:21:53.659 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:21:54.593 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.593 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.593 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.593 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.593 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.593 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.593 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.593 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.851 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:54.851 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.852 00:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.852 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:54.852 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:54.852 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.852 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.852 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.852 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.852 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.852 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.852 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.852 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.418 00:21:55.676 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.676 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.676 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.933 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.933 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.933 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.933 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.933 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.933 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.933 { 00:21:55.933 "cntlid": 139, 00:21:55.933 "qid": 0, 00:21:55.933 "state": "enabled", 00:21:55.933 "thread": "nvmf_tgt_poll_group_000", 00:21:55.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:55.933 "listen_address": { 00:21:55.933 "trtype": "TCP", 00:21:55.933 "adrfam": "IPv4", 00:21:55.933 "traddr": "10.0.0.2", 00:21:55.933 "trsvcid": "4420" 00:21:55.933 }, 00:21:55.933 "peer_address": { 00:21:55.933 "trtype": "TCP", 00:21:55.933 "adrfam": "IPv4", 00:21:55.933 "traddr": "10.0.0.1", 00:21:55.933 "trsvcid": "55400" 00:21:55.933 }, 00:21:55.933 "auth": { 00:21:55.933 "state": "completed", 00:21:55.933 "digest": "sha512", 00:21:55.933 "dhgroup": "ffdhe8192" 00:21:55.933 } 00:21:55.933 } 00:21:55.933 ]' 00:21:55.933 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.933 00:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.933 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.933 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:55.933 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.933 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.933 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.934 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.192 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:21:56.192 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: --dhchap-ctrl-secret DHHC-1:02:ZjRiYjk2ZjM3MDA1NTMyZjE5ZmUyZWI5YTdkODUxYmY5NGE4NjI1MWRiM2Q2M2IzCitAaw==: 00:21:57.129 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.129 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.129 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.129 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.129 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.129 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.129 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.129 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.387 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:57.387 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.387 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.387 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:57.387 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:57.387 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.387 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:57.387 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.387 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.387 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.387 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.387 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.387 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.320 00:21:58.320 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.320 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.320 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.577 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.578 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.578 00:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.578 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.578 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.578 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.578 { 00:21:58.578 "cntlid": 141, 00:21:58.578 "qid": 0, 00:21:58.578 "state": "enabled", 00:21:58.578 "thread": "nvmf_tgt_poll_group_000", 00:21:58.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:58.578 "listen_address": { 00:21:58.578 "trtype": "TCP", 00:21:58.578 "adrfam": "IPv4", 00:21:58.578 "traddr": "10.0.0.2", 00:21:58.578 "trsvcid": "4420" 00:21:58.578 }, 00:21:58.578 "peer_address": { 00:21:58.578 "trtype": "TCP", 00:21:58.578 "adrfam": "IPv4", 00:21:58.578 "traddr": "10.0.0.1", 00:21:58.578 "trsvcid": "55428" 00:21:58.578 }, 00:21:58.578 "auth": { 00:21:58.578 "state": "completed", 00:21:58.578 "digest": "sha512", 00:21:58.578 "dhgroup": "ffdhe8192" 00:21:58.578 } 00:21:58.578 } 00:21:58.578 ]' 00:21:58.578 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.578 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.578 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.578 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:58.578 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.578 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.578 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.578 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.835 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:21:58.835 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:01:MDM1MmMxNjdlZGFlMTY0ZGI3MDI3MTAzMzQ5ZDU5ZGKCUd+h: 00:21:59.768 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.768 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.768 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.768 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.026 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.026 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.026 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:00.026 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:00.287 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:00.287 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.287 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:00.287 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:00.287 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:00.287 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.287 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:00.287 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.287 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.287 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.287 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:00.287 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.287 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:01.228 00:22:01.228 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.228 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.228 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.486 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.486 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.486 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.486 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.486 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.486 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.486 { 00:22:01.486 "cntlid": 143, 00:22:01.486 "qid": 0, 00:22:01.486 "state": "enabled", 00:22:01.486 "thread": "nvmf_tgt_poll_group_000", 00:22:01.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:01.486 "listen_address": { 00:22:01.486 "trtype": "TCP", 00:22:01.486 "adrfam": 
"IPv4", 00:22:01.486 "traddr": "10.0.0.2", 00:22:01.486 "trsvcid": "4420" 00:22:01.486 }, 00:22:01.486 "peer_address": { 00:22:01.486 "trtype": "TCP", 00:22:01.486 "adrfam": "IPv4", 00:22:01.486 "traddr": "10.0.0.1", 00:22:01.486 "trsvcid": "55454" 00:22:01.486 }, 00:22:01.486 "auth": { 00:22:01.486 "state": "completed", 00:22:01.486 "digest": "sha512", 00:22:01.486 "dhgroup": "ffdhe8192" 00:22:01.487 } 00:22:01.487 } 00:22:01.487 ]' 00:22:01.487 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.487 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.487 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.487 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:01.487 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.487 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.487 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.487 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.744 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:22:01.744 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=: 00:22:02.678 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.678 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.678 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.678 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.678 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.678 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:02.678 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:02.678 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:02.678 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.678 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.678 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.937 00:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:02.937 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.937 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.937 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:02.937 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:02.937 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.937 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.937 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.937 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.937 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.937 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.937 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.937 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.868 00:22:03.868 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.868 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.868 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.125 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.125 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.125 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.125 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.125 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.125 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.125 { 00:22:04.125 "cntlid": 145, 00:22:04.125 "qid": 0, 00:22:04.125 "state": "enabled", 00:22:04.125 "thread": "nvmf_tgt_poll_group_000", 00:22:04.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:04.125 "listen_address": { 00:22:04.125 "trtype": "TCP", 00:22:04.125 "adrfam": "IPv4", 00:22:04.125 "traddr": "10.0.0.2", 00:22:04.125 "trsvcid": "4420" 00:22:04.125 }, 00:22:04.125 "peer_address": { 00:22:04.125 "trtype": "TCP", 00:22:04.125 "adrfam": "IPv4", 00:22:04.125 "traddr": "10.0.0.1", 00:22:04.125 "trsvcid": "47814" 00:22:04.125 }, 00:22:04.125 "auth": { 00:22:04.125 "state": 
"completed", 00:22:04.125 "digest": "sha512", 00:22:04.125 "dhgroup": "ffdhe8192" 00:22:04.125 } 00:22:04.125 } 00:22:04.125 ]' 00:22:04.125 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.125 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.125 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.125 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:04.125 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.125 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.125 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.125 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.382 00:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:22:04.382 00:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:MWQ0ZGZmN2VjZmU1YWQ0ODliMmZkNjkzMGVmNDFjNjNlYTY3NDVkZjRmNGU3MDMzy95nmg==: --dhchap-ctrl-secret 
DHHC-1:03:M2YzNzA2M2E0ODJjZjk5NmE5ZjkxODU3ZDUxZTk3YzgyNTc3OGI4NmNhMDJiMTU1NTZlM2Y5NzdhMDQwNzdiNZjIPvk=: 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:05.317 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:06.250 request: 00:22:06.250 { 00:22:06.250 "name": "nvme0", 00:22:06.250 "trtype": "tcp", 00:22:06.250 "traddr": "10.0.0.2", 00:22:06.250 "adrfam": "ipv4", 00:22:06.250 "trsvcid": "4420", 00:22:06.250 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.250 "prchk_reftag": false, 00:22:06.250 "prchk_guard": false, 00:22:06.250 "hdgst": false, 00:22:06.250 "ddgst": false, 00:22:06.250 "dhchap_key": "key2", 00:22:06.250 "allow_unrecognized_csi": false, 00:22:06.250 "method": "bdev_nvme_attach_controller", 00:22:06.250 "req_id": 1 00:22:06.250 } 00:22:06.250 Got JSON-RPC error response 00:22:06.250 response: 00:22:06.250 { 00:22:06.250 "code": -5, 00:22:06.250 "message": 
"Input/output error" 00:22:06.250 } 00:22:06.250 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:06.250 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:06.250 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:06.250 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:06.250 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.250 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.250 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.250 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.250 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.250 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.250 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.250 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.250 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:06.250 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:06.251 00:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:06.251 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:06.251 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.251 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:06.251 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.251 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:06.251 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:06.251 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:07.186 request: 00:22:07.186 { 00:22:07.186 "name": "nvme0", 00:22:07.186 "trtype": "tcp", 00:22:07.186 "traddr": "10.0.0.2", 00:22:07.186 "adrfam": "ipv4", 00:22:07.186 "trsvcid": "4420", 00:22:07.186 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:07.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:07.186 "prchk_reftag": false, 00:22:07.186 "prchk_guard": false, 00:22:07.186 "hdgst": 
false, 00:22:07.186 "ddgst": false, 00:22:07.186 "dhchap_key": "key1", 00:22:07.186 "dhchap_ctrlr_key": "ckey2", 00:22:07.186 "allow_unrecognized_csi": false, 00:22:07.186 "method": "bdev_nvme_attach_controller", 00:22:07.186 "req_id": 1 00:22:07.186 } 00:22:07.186 Got JSON-RPC error response 00:22:07.186 response: 00:22:07.186 { 00:22:07.186 "code": -5, 00:22:07.186 "message": "Input/output error" 00:22:07.186 } 00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:07.186 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:07.833 request:
00:22:07.834 {
00:22:07.834 "name": "nvme0",
00:22:07.834 "trtype": "tcp",
00:22:07.834 "traddr": "10.0.0.2",
00:22:07.834 "adrfam": "ipv4",
00:22:07.834 "trsvcid": "4420",
00:22:07.834 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:07.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:22:07.834 "prchk_reftag": false,
00:22:07.834 "prchk_guard": false,
00:22:07.834 "hdgst": false,
00:22:07.834 "ddgst": false,
00:22:07.834 "dhchap_key": "key1",
00:22:07.834 "dhchap_ctrlr_key": "ckey1",
00:22:07.834 "allow_unrecognized_csi": false,
00:22:07.834 "method": "bdev_nvme_attach_controller",
00:22:07.834 "req_id": 1
00:22:07.834 }
00:22:07.834 Got JSON-RPC error response
00:22:07.834 response:
00:22:07.834 {
00:22:07.834 "code": -5,
00:22:07.834 "message": "Input/output error"
00:22:07.834 }
00:22:07.834 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:07.834 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:07.834 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:07.834 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:07.834 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:07.834 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.834 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:07.834 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.834 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 241504
00:22:07.834 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 241504 ']'
00:22:07.834 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 241504
00:22:07.834 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:22:07.834 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:07.834 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 241504
00:22:07.834 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:07.834 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:07.834 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 241504'
killing process with pid 241504
00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 241504
00:22:07.834 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 241504
00:22:08.113 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth
00:22:08.113 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:08.113 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:08.113 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.113 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=264565
00:22:08.113 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth
00:22:08.113 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 264565
00:22:08.113 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 264565 ']'
00:22:08.113 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:08.113 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:08.113 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:08.113 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:08.113 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.395 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:08.395 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:22:08.395 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:08.395 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:08.395 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.395 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:08.395 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:22:08.395 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 264565
00:22:08.395 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 264565 ']'
00:22:08.395 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:08.395 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:08.395 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:08.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:08.395 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.680 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:08.680 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:22:08.680 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd
00:22:08.680 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.680 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.680 null0
00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.680 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:22:08.680 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.UTS
00:22:08.680 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.680 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.m3S ]]
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.m3S
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.34U
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.F0B ]]
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.F0B
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Tn2
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.wtG ]]
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wtG
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.0dg
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]]
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:08.975 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:10.441 nvme0n1
00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:10.441 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:10.441 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:10.441 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:10.441 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:10.441 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.441 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:10.442 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.442 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:10.442 {
00:22:10.442 "cntlid": 1,
00:22:10.442 "qid": 0,
00:22:10.442 "state": "enabled",
00:22:10.442 "thread": "nvmf_tgt_poll_group_000",
00:22:10.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:22:10.442 "listen_address": {
00:22:10.442 "trtype": "TCP",
00:22:10.442 "adrfam": "IPv4",
00:22:10.442 "traddr": "10.0.0.2",
00:22:10.442 "trsvcid": "4420"
00:22:10.442 },
00:22:10.442 "peer_address": {
00:22:10.442 "trtype": "TCP",
00:22:10.442 "adrfam": "IPv4",
00:22:10.442 "traddr": "10.0.0.1",
00:22:10.442 "trsvcid": "47870"
00:22:10.442 },
00:22:10.442 "auth": {
00:22:10.442 "state": "completed",
00:22:10.442 "digest": "sha512",
00:22:10.442 "dhgroup": "ffdhe8192"
00:22:10.442 }
00:22:10.442 }
00:22:10.442 ]'
00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:10.442 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:10.442 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:10.442 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:22:10.442 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:10.699 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:10.699 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:10.699 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:10.957 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=:
00:22:10.957 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=:
00:22:11.890 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:11.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:11.890 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.890 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:11.890 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.890 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:22:11.890 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.890 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:11.890 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.890 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:22:11.890 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:22:12.148 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:22:12.148 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:12.148 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:22:12.148 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:22:12.148 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:12.148 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:22:12.148 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:12.148 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:12.148 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:12.148 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:12.406 request:
00:22:12.407 {
00:22:12.407 "name": "nvme0",
00:22:12.407 "trtype": "tcp",
00:22:12.407 "traddr": "10.0.0.2",
00:22:12.407 "adrfam": "ipv4",
00:22:12.407 "trsvcid": "4420",
00:22:12.407 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:12.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:22:12.407 "prchk_reftag": false,
00:22:12.407 "prchk_guard": false,
00:22:12.407 "hdgst": false,
00:22:12.407 "ddgst": false,
00:22:12.407 "dhchap_key": "key3",
00:22:12.407 "allow_unrecognized_csi": false,
00:22:12.407 "method": "bdev_nvme_attach_controller",
00:22:12.407 "req_id": 1
00:22:12.407 }
00:22:12.407 Got JSON-RPC error response
00:22:12.407 response:
00:22:12.407 {
00:22:12.407 "code": -5,
00:22:12.407 "message": "Input/output error"
00:22:12.407 }
00:22:12.407 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:12.407 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:12.407 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:12.407 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:12.407 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:22:12.407 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:22:12.407 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:22:12.407 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:22:12.665 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:22:12.665 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:12.665 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:22:12.665 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:22:12.665 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:12.665 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:22:12.665 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:12.665 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:12.665 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:12.665 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:12.922 request:
00:22:12.922 {
00:22:12.922 "name": "nvme0",
00:22:12.922 "trtype": "tcp",
00:22:12.922 "traddr": "10.0.0.2",
00:22:12.922 "adrfam": "ipv4",
00:22:12.922 "trsvcid": "4420",
00:22:12.922 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:12.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:22:12.922 "prchk_reftag": false,
00:22:12.922 "prchk_guard": false,
00:22:12.922 "hdgst": false,
00:22:12.922 "ddgst": false,
00:22:12.922 "dhchap_key": "key3",
00:22:12.922 "allow_unrecognized_csi": false,
00:22:12.922 "method": "bdev_nvme_attach_controller",
00:22:12.922 "req_id": 1
00:22:12.922 }
00:22:12.922 Got JSON-RPC error response
00:22:12.922 response:
00:22:12.922 {
00:22:12.922 "code": -5,
00:22:12.922 "message": "Input/output error"
00:22:12.922 }
00:22:12.922 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:12.922 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:12.922 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:12.922 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:12.922 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:22:12.922 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:22:12.922 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:22:12.922 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:22:12.922 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:22:12.922 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:22:13.179 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:13.179 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:13.179 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.179 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:13.179 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:13.179 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:13.180 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.180 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:13.180 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:13.180 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:13.180 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:13.180 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:22:13.180 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:13.180 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:22:13.180 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:13.180 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:13.180 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:13.180 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:13.746 request:
00:22:13.746 {
00:22:13.746 "name": "nvme0",
00:22:13.746 "trtype": "tcp",
00:22:13.746 "traddr": "10.0.0.2",
00:22:13.746 "adrfam": "ipv4",
00:22:13.746 "trsvcid": "4420",
00:22:13.746 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:13.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:22:13.746 "prchk_reftag": false,
00:22:13.746 "prchk_guard": false,
00:22:13.746 "hdgst": false,
00:22:13.746 "ddgst": false,
00:22:13.746 "dhchap_key": "key0",
00:22:13.746 "dhchap_ctrlr_key": "key1",
00:22:13.746 "allow_unrecognized_csi": false,
00:22:13.746 "method": "bdev_nvme_attach_controller",
00:22:13.746 "req_id": 1
00:22:13.746 }
00:22:13.746 Got JSON-RPC error response
00:22:13.746 response:
00:22:13.746 {
00:22:13.746 "code": -5,
00:22:13.746 "message": "Input/output error"
00:22:13.746 }
00:22:13.746 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:13.746 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:13.746 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:13.746 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:13.746 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:22:13.746 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:22:13.746 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:22:14.005 nvme0n1
00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:22:14.005 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:22:14.005 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:14.263 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:14.263 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:14.263 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:14.520 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1
00:22:14.520 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.520 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:14.520 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.520 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:22:14.520 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:22:14.521 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:22:15.903 nvme0n1
00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:22:15.903 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:22:15.903 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:16.161 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:16.161 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:16.161 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:16.161 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:16.161 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:16.161 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:22:16.161 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:22:16.161 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:16.419 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:16.419 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=:
00:22:16.419 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: --dhchap-ctrl-secret DHHC-1:03:MmMxMGYyYmM1Y2QyZmViMTFlZWExYTY4OTFjYzFkY2I1OWU4NzMzZGVkNTM5MDJhMWUyOTQ0ZjcxMzgzNDVjYmBdiwU=:
00:22:17.351 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:22:17.351 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:22:17.351 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:22:17.351 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:22:17.351 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:22:17.351 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:22:17.351 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:22:17.351 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:17.351 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:17.609 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:22:17.609 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:17.609 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:22:17.609 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:22:17.609 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:17.609 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:22:17.609 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:17.609 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1
00:22:17.609 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:17.609 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:18.541 request: 00:22:18.541 { 00:22:18.541 "name": "nvme0", 00:22:18.541 "trtype": "tcp", 00:22:18.541 "traddr": "10.0.0.2", 00:22:18.541 "adrfam": "ipv4", 00:22:18.541 "trsvcid": "4420", 00:22:18.541 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:18.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:18.541 "prchk_reftag": false, 00:22:18.541 "prchk_guard": false, 00:22:18.541 "hdgst": false, 00:22:18.541 "ddgst": false, 00:22:18.541 "dhchap_key": "key1", 00:22:18.541 "allow_unrecognized_csi": false, 00:22:18.541 "method": "bdev_nvme_attach_controller", 00:22:18.541 "req_id": 1 00:22:18.541 } 00:22:18.541 Got JSON-RPC error response 00:22:18.541 response: 00:22:18.541 { 00:22:18.541 "code": -5, 00:22:18.541 "message": "Input/output error" 00:22:18.541 } 00:22:18.541 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:18.541 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:18.541 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:18.541 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:18.541 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:18.541 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:18.541 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:19.912 nvme0n1 00:22:19.912 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:19.912 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:19.912 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.912 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.912 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.912 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.479 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.479 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.479 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:20.479 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.479 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:20.479 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:20.479 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:20.737 nvme0n1 00:22:20.737 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:20.737 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:20.737 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.994 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.994 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.994 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.251 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:21.251 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.251 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.251 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.251 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: '' 2s 00:22:21.251 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:21.251 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:21.252 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: 00:22:21.252 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:21.252 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:21.252 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:21.252 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: ]] 00:22:21.252 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YmQ5ZGJhMjljNTJhY2MxNjZiMWRjZjgyMWRkNzVmMWR+AU16: 00:22:21.252 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:21.252 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:21.252 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:23.150 
00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:23.150 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:23.150 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:23.150 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:23.150 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:23.150 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:23.407 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:23.407 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:23.407 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.407 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.407 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.407 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: 2s 00:22:23.407 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:23.408 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:23.408 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:23.408 00:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: 00:22:23.408 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:23.408 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:23.408 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:23.408 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: ]] 00:22:23.408 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MWQ0YmYyZTkyNDlmN2RjZjM4OGMzNjZiOWNjMDUyMzFiOWE3YmQ5Njg0NDQ3ZTE36pLhlg==: 00:22:23.408 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:23.408 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:25.310 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:25.310 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:25.310 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:25.310 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:25.310 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:25.311 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:25.311 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:25.311 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.311 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:25.311 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.311 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.311 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.311 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:25.311 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:25.311 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:26.693 nvme0n1 00:22:26.693 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:22:26.693 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.693 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.693 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.693 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:26.693 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:27.627 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:27.627 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:27.627 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.884 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.884 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.884 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.884 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.884 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.884 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:27.884 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:28.140 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:28.141 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:28.141 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.397 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.397 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:28.397 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.397 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.397 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.397 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:28.397 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:28.397 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:28.397 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:28.397 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.397 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:28.397 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.397 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:28.397 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:29.329 request: 00:22:29.329 { 00:22:29.329 "name": "nvme0", 00:22:29.329 "dhchap_key": "key1", 00:22:29.329 "dhchap_ctrlr_key": "key3", 00:22:29.330 "method": "bdev_nvme_set_keys", 00:22:29.330 "req_id": 1 00:22:29.330 } 00:22:29.330 Got JSON-RPC error response 00:22:29.330 response: 00:22:29.330 { 00:22:29.330 "code": -13, 00:22:29.330 "message": "Permission denied" 00:22:29.330 } 00:22:29.330 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:29.330 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:29.330 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:29.330 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:29.330 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:29.330 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.330 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:29.587 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:29.587 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:30.521 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:30.521 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:30.521 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.779 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:30.779 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:30.779 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.779 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.779 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.779 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:30.779 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:30.779 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:32.152 nvme0n1 00:22:32.152 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:32.152 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.152 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.152 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.152 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:32.152 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:32.152 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:32.152 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:32.152 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:22:32.152 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:32.152 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:32.152 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:32.152 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:33.086 request: 00:22:33.086 { 00:22:33.086 "name": "nvme0", 00:22:33.086 "dhchap_key": "key2", 00:22:33.086 "dhchap_ctrlr_key": "key0", 00:22:33.086 "method": "bdev_nvme_set_keys", 00:22:33.086 "req_id": 1 00:22:33.086 } 00:22:33.086 Got JSON-RPC error response 00:22:33.086 response: 00:22:33.086 { 00:22:33.086 "code": -13, 00:22:33.086 "message": "Permission denied" 00:22:33.086 } 00:22:33.086 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:33.086 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:33.086 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:33.086 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:33.086 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:33.086 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:33.086 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.344 
00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:33.344 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:34.278 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:34.278 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:34.278 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.536 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:34.536 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:34.536 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:34.536 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 241532 00:22:34.536 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 241532 ']' 00:22:34.536 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 241532 00:22:34.536 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:34.536 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.536 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 241532 00:22:34.536 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:34.536 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:34.536 00:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 241532' 00:22:34.536 killing process with pid 241532 00:22:34.536 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 241532 00:22:34.536 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 241532 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.102 rmmod nvme_tcp 00:22:35.102 rmmod nvme_fabrics 00:22:35.102 rmmod nvme_keyring 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 264565 ']' 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 264565 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 264565 ']' 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # kill -0 264565 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 264565 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 264565' 00:22:35.102 killing process with pid 264565 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 264565 00:22:35.102 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 264565 00:22:35.361 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:35.361 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:35.361 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:35.361 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:35.361 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:35.361 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:35.361 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:35.361 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.361 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:35.361 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.361 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.361 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.267 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:37.267 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.UTS /tmp/spdk.key-sha256.34U /tmp/spdk.key-sha384.Tn2 /tmp/spdk.key-sha512.0dg /tmp/spdk.key-sha512.m3S /tmp/spdk.key-sha384.F0B /tmp/spdk.key-sha256.wtG '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:37.267 00:22:37.267 real 3m33.498s 00:22:37.267 user 8m19.133s 00:22:37.267 sys 0m28.343s 00:22:37.267 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.267 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.267 ************************************ 00:22:37.267 END TEST nvmf_auth_target 00:22:37.267 ************************************ 00:22:37.267 00:28:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:37.267 00:28:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:37.267 00:28:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:37.267 00:28:01 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.267 00:28:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:37.526 ************************************ 00:22:37.526 START TEST nvmf_bdevio_no_huge 00:22:37.526 ************************************ 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:37.526 * Looking for test storage... 00:22:37.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.526 00:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:37.526 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # 
ver2[v]=2 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:37.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.527 --rc genhtml_branch_coverage=1 00:22:37.527 --rc genhtml_function_coverage=1 00:22:37.527 --rc genhtml_legend=1 00:22:37.527 --rc geninfo_all_blocks=1 00:22:37.527 --rc geninfo_unexecuted_blocks=1 00:22:37.527 00:22:37.527 ' 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:37.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.527 --rc genhtml_branch_coverage=1 00:22:37.527 --rc genhtml_function_coverage=1 00:22:37.527 --rc genhtml_legend=1 00:22:37.527 --rc geninfo_all_blocks=1 00:22:37.527 --rc geninfo_unexecuted_blocks=1 00:22:37.527 00:22:37.527 ' 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:37.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.527 --rc genhtml_branch_coverage=1 00:22:37.527 --rc genhtml_function_coverage=1 00:22:37.527 --rc genhtml_legend=1 00:22:37.527 --rc geninfo_all_blocks=1 00:22:37.527 --rc geninfo_unexecuted_blocks=1 00:22:37.527 00:22:37.527 ' 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:37.527 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.527 --rc genhtml_branch_coverage=1 00:22:37.527 --rc genhtml_function_coverage=1 00:22:37.527 --rc genhtml_legend=1 00:22:37.527 --rc geninfo_all_blocks=1 00:22:37.527 --rc geninfo_unexecuted_blocks=1 00:22:37.527 00:22:37.527 ' 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.527 00:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:37.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:37.527 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:37.528 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:37.528 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.071 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:22:40.072 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:40.072 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:40.072 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.072 
00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:40.072 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.072 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:40.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:22:40.073 00:22:40.073 --- 10.0.0.2 ping statistics --- 00:22:40.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.073 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:22:40.073 00:22:40.073 --- 10.0.0.1 ping statistics --- 00:22:40.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.073 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=269745 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 269745 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 269745 ']' 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.073 [2024-11-18 00:28:03.631112] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:22:40.073 [2024-11-18 00:28:03.631208] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:40.073 [2024-11-18 00:28:03.706859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.073 [2024-11-18 00:28:03.755232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.073 [2024-11-18 00:28:03.755285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.073 [2024-11-18 00:28:03.755336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.073 [2024-11-18 00:28:03.755351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.073 [2024-11-18 00:28:03.755366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:40.073 [2024-11-18 00:28:03.756424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:40.073 [2024-11-18 00:28:03.756485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:40.073 [2024-11-18 00:28:03.756532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:40.073 [2024-11-18 00:28:03.756535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.073 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.331 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.331 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.332 [2024-11-18 00:28:03.910979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:40.332 00:28:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.332 Malloc0 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.332 [2024-11-18 00:28:03.949251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.332 00:28:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.332 { 00:22:40.332 "params": { 00:22:40.332 "name": "Nvme$subsystem", 00:22:40.332 "trtype": "$TEST_TRANSPORT", 00:22:40.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.332 "adrfam": "ipv4", 00:22:40.332 "trsvcid": "$NVMF_PORT", 00:22:40.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.332 "hdgst": ${hdgst:-false}, 00:22:40.332 "ddgst": ${ddgst:-false} 00:22:40.332 }, 00:22:40.332 "method": "bdev_nvme_attach_controller" 00:22:40.332 } 00:22:40.332 EOF 00:22:40.332 )") 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:40.332 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:40.332 "params": { 00:22:40.332 "name": "Nvme1", 00:22:40.332 "trtype": "tcp", 00:22:40.332 "traddr": "10.0.0.2", 00:22:40.332 "adrfam": "ipv4", 00:22:40.332 "trsvcid": "4420", 00:22:40.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.332 "hdgst": false, 00:22:40.332 "ddgst": false 00:22:40.332 }, 00:22:40.332 "method": "bdev_nvme_attach_controller" 00:22:40.332 }' 00:22:40.332 [2024-11-18 00:28:03.999024] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:22:40.332 [2024-11-18 00:28:03.999094] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid269884 ] 00:22:40.332 [2024-11-18 00:28:04.067194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:40.332 [2024-11-18 00:28:04.118210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.332 [2024-11-18 00:28:04.118263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.332 [2024-11-18 00:28:04.118266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.590 I/O targets: 00:22:40.590 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:40.590 00:22:40.590 00:22:40.590 CUnit - A unit testing framework for C - Version 2.1-3 00:22:40.590 http://cunit.sourceforge.net/ 00:22:40.590 00:22:40.590 00:22:40.590 Suite: bdevio tests on: Nvme1n1 00:22:40.590 Test: blockdev write read block ...passed 00:22:40.590 Test: blockdev write zeroes read block ...passed 00:22:40.590 Test: blockdev write zeroes read no split ...passed 00:22:40.590 Test: blockdev write zeroes 
read split ...passed 00:22:40.848 Test: blockdev write zeroes read split partial ...passed 00:22:40.849 Test: blockdev reset ...[2024-11-18 00:28:04.465731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:40.849 [2024-11-18 00:28:04.465843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a486a0 (9): Bad file descriptor 00:22:40.849 [2024-11-18 00:28:04.563452] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:40.849 passed 00:22:40.849 Test: blockdev write read 8 blocks ...passed 00:22:40.849 Test: blockdev write read size > 128k ...passed 00:22:40.849 Test: blockdev write read invalid size ...passed 00:22:40.849 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:40.849 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:40.849 Test: blockdev write read max offset ...passed 00:22:41.107 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:41.107 Test: blockdev writev readv 8 blocks ...passed 00:22:41.107 Test: blockdev writev readv 30 x 1block ...passed 00:22:41.107 Test: blockdev writev readv block ...passed 00:22:41.107 Test: blockdev writev readv size > 128k ...passed 00:22:41.107 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:41.107 Test: blockdev comparev and writev ...[2024-11-18 00:28:04.777282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.107 [2024-11-18 00:28:04.777338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.107 [2024-11-18 00:28:04.777370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.107 [2024-11-18 
00:28:04.777389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:41.107 [2024-11-18 00:28:04.777699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.107 [2024-11-18 00:28:04.777724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:41.107 [2024-11-18 00:28:04.777747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.107 [2024-11-18 00:28:04.777763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:41.107 [2024-11-18 00:28:04.778069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.107 [2024-11-18 00:28:04.778094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:41.107 [2024-11-18 00:28:04.778116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.107 [2024-11-18 00:28:04.778133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:41.107 [2024-11-18 00:28:04.778451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.107 [2024-11-18 00:28:04.778476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:41.107 [2024-11-18 00:28:04.778498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.107 [2024-11-18 00:28:04.778514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:41.107 passed 00:22:41.107 Test: blockdev nvme passthru rw ...passed 00:22:41.107 Test: blockdev nvme passthru vendor specific ...[2024-11-18 00:28:04.861568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:41.107 [2024-11-18 00:28:04.861595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:41.107 [2024-11-18 00:28:04.861760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:41.107 [2024-11-18 00:28:04.861789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:41.107 [2024-11-18 00:28:04.861924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:41.107 [2024-11-18 00:28:04.861947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:41.107 [2024-11-18 00:28:04.862090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:41.107 [2024-11-18 00:28:04.862113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:41.107 passed 00:22:41.107 Test: blockdev nvme admin passthru ...passed 00:22:41.107 Test: blockdev copy ...passed 00:22:41.107 00:22:41.107 Run Summary: Type Total Ran Passed Failed Inactive 00:22:41.107 suites 1 1 n/a 0 0 00:22:41.107 tests 23 23 23 0 0 00:22:41.107 asserts 152 152 152 0 n/a 00:22:41.107 00:22:41.107 Elapsed time = 1.227 seconds 
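As an aside, the `gen_nvmf_target_json` heredoc traced above builds the JSON config that bdevio consumes via `--json /dev/fd/62`: one `bdev_nvme_attach_controller` entry per subsystem, with the digest flags defaulting to false. A minimal Python sketch of that config generation follows; the field values are taken directly from the `printf` output in this log, and the helper name is reused here only for illustration (the real helper is a bash function in nvmf/common.sh):

```python
import json

def gen_nvmf_target_json(subsystem=1, traddr="10.0.0.2", trsvcid="4420"):
    # Mirror the config block printed by nvmf/common.sh: a single
    # bdev_nvme_attach_controller call targeting the test subsystem.
    return json.dumps({
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": "tcp",
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": False,   # host digest off, as in the log
            "ddgst": False,   # data digest off, as in the log
        },
        "method": "bdev_nvme_attach_controller",
    }, indent=2)

print(gen_nvmf_target_json())
```

In the test above this JSON is piped to the bdevio binary on an inherited file descriptor, so no temporary config file is needed.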
00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:41.673 rmmod nvme_tcp 00:22:41.673 rmmod nvme_fabrics 00:22:41.673 rmmod nvme_keyring 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 269745 ']' 00:22:41.673 00:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 269745 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 269745 ']' 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 269745 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 269745 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 269745' 00:22:41.673 killing process with pid 269745 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 269745 00:22:41.673 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 269745 00:22:41.933 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:41.933 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:41.933 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:41.933 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:41.933 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:41.933 00:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:41.933 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:41.933 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:41.933 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:41.933 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.933 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.933 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:44.470 00:22:44.470 real 0m6.642s 00:22:44.470 user 0m10.343s 00:22:44.470 sys 0m2.733s 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:44.470 ************************************ 00:22:44.470 END TEST nvmf_bdevio_no_huge 00:22:44.470 ************************************ 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:44.470 
************************************ 00:22:44.470 START TEST nvmf_tls 00:22:44.470 ************************************ 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:44.470 * Looking for test storage... 00:22:44.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:44.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.470 --rc genhtml_branch_coverage=1 00:22:44.470 --rc genhtml_function_coverage=1 00:22:44.470 --rc genhtml_legend=1 00:22:44.470 --rc geninfo_all_blocks=1 00:22:44.470 --rc geninfo_unexecuted_blocks=1 00:22:44.470 00:22:44.470 ' 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:44.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.470 --rc genhtml_branch_coverage=1 00:22:44.470 --rc genhtml_function_coverage=1 00:22:44.470 --rc genhtml_legend=1 00:22:44.470 --rc geninfo_all_blocks=1 00:22:44.470 --rc geninfo_unexecuted_blocks=1 00:22:44.470 00:22:44.470 ' 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:44.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.470 --rc genhtml_branch_coverage=1 00:22:44.470 --rc genhtml_function_coverage=1 00:22:44.470 --rc genhtml_legend=1 00:22:44.470 --rc geninfo_all_blocks=1 00:22:44.470 --rc geninfo_unexecuted_blocks=1 00:22:44.470 00:22:44.470 ' 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:44.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.470 --rc genhtml_branch_coverage=1 00:22:44.470 --rc genhtml_function_coverage=1 00:22:44.470 --rc genhtml_legend=1 00:22:44.470 --rc geninfo_all_blocks=1 00:22:44.470 --rc geninfo_unexecuted_blocks=1 00:22:44.470 00:22:44.470 ' 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.470 
00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.470 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:44.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:44.471 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.373 00:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:46.373 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:46.373 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:46.373 00:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:46.373 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:46.373 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:46.373 00:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:46.373 
00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:46.373 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:46.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:22:46.374 00:22:46.374 --- 10.0.0.2 ping statistics --- 00:22:46.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.374 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:46.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:22:46.374 00:22:46.374 --- 10.0.0.1 ping statistics --- 00:22:46.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.374 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:46.374 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:46.632 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:46.632 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:46.632 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:46.632 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.632 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=271963 00:22:46.632 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:22:46.632 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 271963 00:22:46.632 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 271963 ']' 00:22:46.632 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.632 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.632 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.633 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.633 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.633 [2024-11-18 00:28:10.254339] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:22:46.633 [2024-11-18 00:28:10.254428] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.633 [2024-11-18 00:28:10.327950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.633 [2024-11-18 00:28:10.372483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.633 [2024-11-18 00:28:10.372526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
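The `nvmf_tcp_init` steps traced above split the NIC's two ports into an initiator side and a target side by moving one port into a private network namespace and addressing the two ends as 10.0.0.1/10.0.0.2. A condensed, hedged sketch of that sequence — interface and namespace names are taken from this trace; the `DRY_RUN` wrapper is added here so the steps can be previewed without root or the test rig's `cvl_0_*` ports:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init flow in the trace. With DRY_RUN=1 (the
# default here) each command is only printed, since actually running
# them needs root plus the cvl_0_* interfaces from the CI machine.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

run ip -4 addr flush "$target_if"
run ip -4 addr flush "$initiator_if"
run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"         # target port lives in the namespace
run ip addr add 10.0.0.1/24 dev "$initiator_if"  # initiator side stays in the root ns
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
```

With the split in place, the target app is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the cross-namespace pings in the trace validate both directions before `nvmf_tgt` starts.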
00:22:46.633 [2024-11-18 00:28:10.372555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.633 [2024-11-18 00:28:10.372565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.633 [2024-11-18 00:28:10.372575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:46.633 [2024-11-18 00:28:10.373154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.890 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.890 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:46.890 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:46.890 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:46.891 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.891 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.891 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:46.891 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:47.148 true 00:22:47.148 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:47.148 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:47.406 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:47.406 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:47.406 
00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:47.664 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:47.664 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:47.923 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:47.923 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:47.923 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:48.181 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:48.181 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:48.438 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:48.438 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:48.439 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:48.439 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:48.696 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:48.696 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:48.696 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:22:48.954 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:48.954 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:49.212 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:49.212 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:49.212 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:49.470 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:49.470 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:49.728 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:49.728 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:49.728 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:49.728 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:49.728 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:49.728 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:49.728 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:49.728 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:49.728 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:49.986 00:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:49.986 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:49.986 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:49.986 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:49.986 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:49.986 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:49.986 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:49.986 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:49.986 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:49.986 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:49.986 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.NPcwb9lkxb 00:22:49.986 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:49.986 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.c3siypWjgW 00:22:49.986 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:49.986 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:49.986 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.NPcwb9lkxb 00:22:49.986 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.c3siypWjgW 00:22:49.986 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:50.243 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:50.501 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.NPcwb9lkxb 00:22:50.501 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NPcwb9lkxb 00:22:50.501 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:50.760 [2024-11-18 00:28:14.569038] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.018 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:51.276 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:51.543 [2024-11-18 00:28:15.118552] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:51.543 [2024-11-18 00:28:15.118777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.544 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:51.803 malloc0 00:22:51.803 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:52.061 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NPcwb9lkxb 00:22:52.323 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:52.581 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.NPcwb9lkxb 00:23:02.562 Initializing NVMe Controllers 00:23:02.562 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:02.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:02.562 Initialization complete. Launching workers. 
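The `format_interchange_psk`/`format_key` helpers traced above wrap a configured hex key into an NVMe/TCP TLS PSK interchange string of the form `NVMeTLSkey-1:<hh>:<base64>:`. A hedged reconstruction of what the inline `python -` step appears to compute — the assumption here is that the base64 payload is the ASCII key followed by its little-endian CRC32, which is consistent with the 48-character payloads in the trace, but treat the exact CRC handling as an assumption rather than a spec statement:

```shell
#!/usr/bin/env bash
# Sketch: rebuild an interchange PSK like the NVMeTLSkey-1:01:...:
# values in the trace. Assumption: payload = ascii(key) + crc32 (LE).
format_key_sketch() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'EOF'
import base64, sys, zlib

prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
# CRC32 of the ASCII key, appended little-endian, then base64 the lot.
crc = zlib.crc32(key.encode()).to_bytes(4, byteorder="little")
b64 = base64.b64encode(key.encode() + crc).decode()
print(f"{prefix}:{digest:02x}:{b64}:")
EOF
}

format_key_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
```

The resulting string is what the test writes to the `mktemp` key files (`chmod 0600`) and later hands to `keyring_file_add_key`.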
00:23:02.562 ======================================================== 00:23:02.562 Latency(us) 00:23:02.562 Device Information : IOPS MiB/s Average min max 00:23:02.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8616.36 33.66 7429.77 982.89 10297.06 00:23:02.562 ======================================================== 00:23:02.562 Total : 8616.36 33.66 7429.77 982.89 10297.06 00:23:02.562 00:23:02.562 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NPcwb9lkxb 00:23:02.562 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:02.562 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:02.562 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:02.562 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NPcwb9lkxb 00:23:02.562 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:02.562 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=273862 00:23:02.562 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:02.562 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:02.562 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 273862 /var/tmp/bdevperf.sock 00:23:02.562 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 273862 ']' 00:23:02.562 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
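The `waitforlisten 273862 /var/tmp/bdevperf.sock` call above polls until bdevperf is up on its UNIX-domain RPC socket before the test proceeds. A simplified, hedged version of that retry loop — this sketch only polls for the socket path to appear, whereas the real helper also checks the PID and RPC liveness; the paths used below are stand-ins, not the trace's:

```shell
#!/usr/bin/env bash
# Sketch of waitforlisten's retry shape: poll until a path (a stand-in
# for /var/tmp/bdevperf.sock) exists, or give up after max_retries.
wait_for_path() {
    local path=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.05
    done
    return 1
}

# Demo: the path appears shortly after the wait starts.
demo=$(mktemp -u)
( sleep 0.2; : > "$demo" ) &
wait_for_path "$demo" && echo "listening: $demo"
wait
rm -f "$demo"
```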
00:23:02.562 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.562 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:02.562 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.562 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.827 [2024-11-18 00:28:26.400772] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:02.827 [2024-11-18 00:28:26.400860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid273862 ] 00:23:02.827 [2024-11-18 00:28:26.469751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.827 [2024-11-18 00:28:26.515063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.827 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.827 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:02.827 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NPcwb9lkxb 00:23:03.086 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:23:03.653 [2024-11-18 00:28:27.210188] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:03.653 TLSTESTn1 00:23:03.653 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:03.653 Running I/O for 10 seconds... 00:23:05.958 2994.00 IOPS, 11.70 MiB/s [2024-11-17T23:28:30.714Z] 3107.00 IOPS, 12.14 MiB/s [2024-11-17T23:28:31.660Z] 3125.67 IOPS, 12.21 MiB/s [2024-11-17T23:28:32.595Z] 3098.25 IOPS, 12.10 MiB/s [2024-11-17T23:28:33.531Z] 3126.40 IOPS, 12.21 MiB/s [2024-11-17T23:28:34.464Z] 3142.00 IOPS, 12.27 MiB/s [2024-11-17T23:28:35.836Z] 3151.14 IOPS, 12.31 MiB/s [2024-11-17T23:28:36.770Z] 3145.12 IOPS, 12.29 MiB/s [2024-11-17T23:28:37.704Z] 3148.89 IOPS, 12.30 MiB/s [2024-11-17T23:28:37.704Z] 3135.70 IOPS, 12.25 MiB/s 00:23:13.882 Latency(us) 00:23:13.882 [2024-11-17T23:28:37.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.882 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:13.882 Verification LBA range: start 0x0 length 0x2000 00:23:13.882 TLSTESTn1 : 10.02 3142.31 12.27 0.00 0.00 40668.62 6893.42 40195.41 00:23:13.882 [2024-11-17T23:28:37.704Z] =================================================================================================================== 00:23:13.882 [2024-11-17T23:28:37.704Z] Total : 3142.31 12.27 0.00 0.00 40668.62 6893.42 40195.41 00:23:13.882 { 00:23:13.882 "results": [ 00:23:13.882 { 00:23:13.882 "job": "TLSTESTn1", 00:23:13.882 "core_mask": "0x4", 00:23:13.882 "workload": "verify", 00:23:13.882 "status": "finished", 00:23:13.882 "verify_range": { 00:23:13.882 "start": 0, 00:23:13.882 "length": 8192 00:23:13.882 }, 00:23:13.882 "queue_depth": 128, 00:23:13.882 "io_size": 4096, 00:23:13.882 "runtime": 10.019395, 00:23:13.882 "iops": 
3142.305498485687, 00:23:13.882 "mibps": 12.274630853459715, 00:23:13.882 "io_failed": 0, 00:23:13.882 "io_timeout": 0, 00:23:13.882 "avg_latency_us": 40668.624039676826, 00:23:13.882 "min_latency_us": 6893.416296296296, 00:23:13.882 "max_latency_us": 40195.41333333333 00:23:13.882 } 00:23:13.882 ], 00:23:13.882 "core_count": 1 00:23:13.882 } 00:23:13.882 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:13.882 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 273862 00:23:13.882 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 273862 ']' 00:23:13.882 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 273862 00:23:13.882 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:13.882 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.882 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 273862 00:23:13.882 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:13.882 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:13.882 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 273862' 00:23:13.882 killing process with pid 273862 00:23:13.882 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 273862 00:23:13.882 Received shutdown signal, test time was about 10.000000 seconds 00:23:13.882 00:23:13.882 Latency(us) 00:23:13.882 [2024-11-17T23:28:37.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.882 [2024-11-17T23:28:37.704Z] 
=================================================================================================================== 00:23:13.882 [2024-11-17T23:28:37.704Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:13.882 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 273862 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.c3siypWjgW 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.c3siypWjgW 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.c3siypWjgW 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.c3siypWjgW 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275176 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275176 /var/tmp/bdevperf.sock 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275176 ']' 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.140 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.140 [2024-11-18 00:28:37.751551] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:14.140 [2024-11-18 00:28:37.751637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275176 ] 00:23:14.140 [2024-11-18 00:28:37.819171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.140 [2024-11-18 00:28:37.865492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.398 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.398 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:14.398 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.c3siypWjgW 00:23:14.656 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:14.915 [2024-11-18 00:28:38.606211] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:14.915 [2024-11-18 00:28:38.614147] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:14.915 [2024-11-18 00:28:38.614330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e98370 (107): Transport endpoint is not connected 00:23:14.915 [2024-11-18 00:28:38.615318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e98370 (9): Bad file descriptor 00:23:14.915 
[2024-11-18 00:28:38.616317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:14.915 [2024-11-18 00:28:38.616342] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:14.915 [2024-11-18 00:28:38.616357] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:14.915 [2024-11-18 00:28:38.616376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:14.915 request: 00:23:14.915 { 00:23:14.915 "name": "TLSTEST", 00:23:14.915 "trtype": "tcp", 00:23:14.915 "traddr": "10.0.0.2", 00:23:14.915 "adrfam": "ipv4", 00:23:14.915 "trsvcid": "4420", 00:23:14.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.915 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:14.915 "prchk_reftag": false, 00:23:14.915 "prchk_guard": false, 00:23:14.915 "hdgst": false, 00:23:14.915 "ddgst": false, 00:23:14.915 "psk": "key0", 00:23:14.915 "allow_unrecognized_csi": false, 00:23:14.915 "method": "bdev_nvme_attach_controller", 00:23:14.915 "req_id": 1 00:23:14.915 } 00:23:14.915 Got JSON-RPC error response 00:23:14.915 response: 00:23:14.915 { 00:23:14.915 "code": -5, 00:23:14.915 "message": "Input/output error" 00:23:14.915 } 00:23:14.915 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275176 00:23:14.915 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275176 ']' 00:23:14.915 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275176 00:23:14.915 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:14.915 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:14.915 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275176 00:23:14.915 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:14.915 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:14.915 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275176' 00:23:14.915 killing process with pid 275176 00:23:14.915 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275176 00:23:14.915 Received shutdown signal, test time was about 10.000000 seconds 00:23:14.915 00:23:14.915 Latency(us) 00:23:14.915 [2024-11-17T23:28:38.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.915 [2024-11-17T23:28:38.737Z] =================================================================================================================== 00:23:14.915 [2024-11-17T23:28:38.737Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:14.915 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275176 00:23:15.173 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:15.173 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:15.173 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:15.173 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:15.173 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:15.173 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NPcwb9lkxb 00:23:15.173 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:23:15.173 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NPcwb9lkxb 00:23:15.173 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:15.173 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.174 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:15.174 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.174 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NPcwb9lkxb 00:23:15.174 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:15.174 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:15.174 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:15.174 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NPcwb9lkxb 00:23:15.174 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:15.174 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275320 00:23:15.174 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:15.174 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:15.174 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275320 
/var/tmp/bdevperf.sock 00:23:15.174 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275320 ']' 00:23:15.174 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.174 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.174 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.174 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.174 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.174 [2024-11-18 00:28:38.879233] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:15.174 [2024-11-18 00:28:38.879342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275320 ] 00:23:15.174 [2024-11-18 00:28:38.950680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.432 [2024-11-18 00:28:39.000812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.432 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.432 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:15.432 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NPcwb9lkxb 00:23:15.689 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:15.947 [2024-11-18 00:28:39.631754] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.947 [2024-11-18 00:28:39.637218] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:15.947 [2024-11-18 00:28:39.637248] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:15.947 [2024-11-18 00:28:39.637290] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:15.947 [2024-11-18 00:28:39.637838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d07370 (107): Transport endpoint is not connected 00:23:15.947 [2024-11-18 00:28:39.638825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d07370 (9): Bad file descriptor 00:23:15.947 [2024-11-18 00:28:39.639825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:15.947 [2024-11-18 00:28:39.639852] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:15.948 [2024-11-18 00:28:39.639881] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:15.948 [2024-11-18 00:28:39.639899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:15.948 request: 00:23:15.948 { 00:23:15.948 "name": "TLSTEST", 00:23:15.948 "trtype": "tcp", 00:23:15.948 "traddr": "10.0.0.2", 00:23:15.948 "adrfam": "ipv4", 00:23:15.948 "trsvcid": "4420", 00:23:15.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.948 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:15.948 "prchk_reftag": false, 00:23:15.948 "prchk_guard": false, 00:23:15.948 "hdgst": false, 00:23:15.948 "ddgst": false, 00:23:15.948 "psk": "key0", 00:23:15.948 "allow_unrecognized_csi": false, 00:23:15.948 "method": "bdev_nvme_attach_controller", 00:23:15.948 "req_id": 1 00:23:15.948 } 00:23:15.948 Got JSON-RPC error response 00:23:15.948 response: 00:23:15.948 { 00:23:15.948 "code": -5, 00:23:15.948 "message": "Input/output error" 00:23:15.948 } 00:23:15.948 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275320 00:23:15.948 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275320 ']' 00:23:15.948 00:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275320 00:23:15.948 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:15.948 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.948 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275320 00:23:15.948 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:15.948 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:15.948 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275320' 00:23:15.948 killing process with pid 275320 00:23:15.948 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275320 00:23:15.948 Received shutdown signal, test time was about 10.000000 seconds 00:23:15.948 00:23:15.948 Latency(us) 00:23:15.948 [2024-11-17T23:28:39.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.948 [2024-11-17T23:28:39.770Z] =================================================================================================================== 00:23:15.948 [2024-11-17T23:28:39.770Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:15.948 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275320 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:16.206 00:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NPcwb9lkxb 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NPcwb9lkxb 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NPcwb9lkxb 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NPcwb9lkxb 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275454 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275454 /var/tmp/bdevperf.sock 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275454 ']' 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:16.206 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.206 [2024-11-18 00:28:39.936626] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:16.206 [2024-11-18 00:28:39.936721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275454 ] 00:23:16.206 [2024-11-18 00:28:40.003616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.464 [2024-11-18 00:28:40.059081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.464 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.464 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:16.464 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NPcwb9lkxb 00:23:16.721 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.978 [2024-11-18 00:28:40.718094] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.978 [2024-11-18 00:28:40.729626] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:16.978 [2024-11-18 00:28:40.729655] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:16.978 [2024-11-18 00:28:40.729698] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:16.978 [2024-11-18 00:28:40.730347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a6370 (107): Transport endpoint is not connected 00:23:16.978 [2024-11-18 00:28:40.731326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a6370 (9): Bad file descriptor 00:23:16.978 [2024-11-18 00:28:40.732325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:16.978 [2024-11-18 00:28:40.732347] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:16.978 [2024-11-18 00:28:40.732361] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:16.978 [2024-11-18 00:28:40.732380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:23:16.978 request: 00:23:16.978 { 00:23:16.978 "name": "TLSTEST", 00:23:16.978 "trtype": "tcp", 00:23:16.978 "traddr": "10.0.0.2", 00:23:16.978 "adrfam": "ipv4", 00:23:16.978 "trsvcid": "4420", 00:23:16.978 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:16.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.978 "prchk_reftag": false, 00:23:16.978 "prchk_guard": false, 00:23:16.978 "hdgst": false, 00:23:16.978 "ddgst": false, 00:23:16.978 "psk": "key0", 00:23:16.978 "allow_unrecognized_csi": false, 00:23:16.978 "method": "bdev_nvme_attach_controller", 00:23:16.978 "req_id": 1 00:23:16.978 } 00:23:16.978 Got JSON-RPC error response 00:23:16.978 response: 00:23:16.978 { 00:23:16.978 "code": -5, 00:23:16.978 "message": "Input/output error" 00:23:16.978 } 00:23:16.978 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275454 00:23:16.978 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275454 ']' 00:23:16.978 00:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275454 00:23:16.978 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:16.978 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.978 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275454 00:23:16.978 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:16.978 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:16.978 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275454' 00:23:16.978 killing process with pid 275454 00:23:16.978 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275454 00:23:16.978 Received shutdown signal, test time was about 10.000000 seconds 00:23:16.978 00:23:16.978 Latency(us) 00:23:16.978 [2024-11-17T23:28:40.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.978 [2024-11-17T23:28:40.800Z] =================================================================================================================== 00:23:16.978 [2024-11-17T23:28:40.800Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:16.978 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275454 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:17.236 00:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275596 00:23:17.236 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:17.237 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:17.237 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275596 /var/tmp/bdevperf.sock 00:23:17.237 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275596 ']' 00:23:17.237 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.237 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.237 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:17.237 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.237 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.237 [2024-11-18 00:28:41.037968] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:17.237 [2024-11-18 00:28:41.038061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275596 ] 00:23:17.495 [2024-11-18 00:28:41.106374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.495 [2024-11-18 00:28:41.153484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.495 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.495 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:17.495 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:17.753 [2024-11-18 00:28:41.532339] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:17.753 [2024-11-18 00:28:41.532386] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:17.753 request: 00:23:17.753 { 00:23:17.753 "name": "key0", 00:23:17.753 "path": "", 00:23:17.753 "method": "keyring_file_add_key", 00:23:17.753 "req_id": 1 00:23:17.753 } 00:23:17.753 Got JSON-RPC error response 00:23:17.753 response: 00:23:17.753 { 00:23:17.753 "code": -1, 00:23:17.753 "message": "Operation not permitted" 00:23:17.753 } 00:23:17.753 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:18.011 [2024-11-18 00:28:41.797171] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:18.011 [2024-11-18 00:28:41.797230] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:18.011 request: 00:23:18.011 { 00:23:18.011 "name": "TLSTEST", 00:23:18.011 "trtype": "tcp", 00:23:18.011 "traddr": "10.0.0.2", 00:23:18.011 "adrfam": "ipv4", 00:23:18.011 "trsvcid": "4420", 00:23:18.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.011 "prchk_reftag": false, 00:23:18.011 "prchk_guard": false, 00:23:18.011 "hdgst": false, 00:23:18.011 "ddgst": false, 00:23:18.011 "psk": "key0", 00:23:18.011 "allow_unrecognized_csi": false, 00:23:18.011 "method": "bdev_nvme_attach_controller", 00:23:18.011 "req_id": 1 00:23:18.011 } 00:23:18.011 Got JSON-RPC error response 00:23:18.011 response: 00:23:18.011 { 00:23:18.011 "code": -126, 00:23:18.011 "message": "Required key not available" 00:23:18.011 } 00:23:18.011 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275596 00:23:18.011 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275596 ']' 00:23:18.011 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275596 00:23:18.011 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:18.011 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.011 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275596 00:23:18.276 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:18.276 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:18.276 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275596' 00:23:18.276 killing process with pid 275596 00:23:18.276 
00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275596 00:23:18.276 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.276 00:23:18.276 Latency(us) 00:23:18.276 [2024-11-17T23:28:42.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.276 [2024-11-17T23:28:42.098Z] =================================================================================================================== 00:23:18.276 [2024-11-17T23:28:42.098Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:18.276 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275596 00:23:18.276 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:18.276 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:18.276 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:18.276 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:18.276 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:18.276 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 271963 00:23:18.276 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 271963 ']' 00:23:18.276 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 271963 00:23:18.276 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:18.276 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.276 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 271963 00:23:18.276 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:23:18.276 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:18.276 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 271963' 00:23:18.276 killing process with pid 271963 00:23:18.276 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 271963 00:23:18.276 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 271963 00:23:18.539 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:18.539 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:18.539 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:18.539 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:18.539 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:18.539 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:18.539 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:18.539 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:18.539 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:18.539 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.f9zssfYYJu 00:23:18.539 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:18.539 00:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.f9zssfYYJu 00:23:18.539 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:18.539 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:18.539 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:18.539 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.539 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=275797 00:23:18.539 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:18.540 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 275797 00:23:18.540 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275797 ']' 00:23:18.540 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.540 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:18.540 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.540 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.540 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.540 [2024-11-18 00:28:42.338030] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:18.540 [2024-11-18 00:28:42.338123] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.798 [2024-11-18 00:28:42.411879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.798 [2024-11-18 00:28:42.460393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.798 [2024-11-18 00:28:42.460451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.798 [2024-11-18 00:28:42.460479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.798 [2024-11-18 00:28:42.460490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.798 [2024-11-18 00:28:42.460500] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:18.798 [2024-11-18 00:28:42.461094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.798 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.798 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:18.798 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:18.798 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:18.798 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.798 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.798 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.f9zssfYYJu 00:23:18.798 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.f9zssfYYJu 00:23:18.798 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:19.057 [2024-11-18 00:28:42.846962] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.057 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:19.314 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:19.572 [2024-11-18 00:28:43.384456] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:19.572 [2024-11-18 00:28:43.384735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:19.830 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:20.089 malloc0 00:23:20.089 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:20.347 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.f9zssfYYJu 00:23:20.604 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:20.863 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.f9zssfYYJu 00:23:20.863 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:20.863 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:20.863 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:20.863 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.f9zssfYYJu 00:23:20.863 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.863 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=276046 00:23:20.863 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.863 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.863 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 276046 /var/tmp/bdevperf.sock 00:23:20.863 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 276046 ']' 00:23:20.863 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.863 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.863 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.863 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.863 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.863 [2024-11-18 00:28:44.532328] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:20.863 [2024-11-18 00:28:44.532431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276046 ] 00:23:20.863 [2024-11-18 00:28:44.600391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.863 [2024-11-18 00:28:44.645388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.121 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.121 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:21.121 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.f9zssfYYJu 00:23:21.379 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:21.637 [2024-11-18 00:28:45.288834] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:21.637 TLSTESTn1 00:23:21.637 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:21.894 Running I/O for 10 seconds... 
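The TLS key used for this run was produced earlier by `format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2`, yielding the `NVMeTLSkey-1:02:…` string passed to `keyring_file_add_key`. A minimal sketch of that construction, assuming (per the NVMe/TCP TLS PSK interchange format) that a little-endian CRC32 of the configured PSK bytes is appended before base64 encoding — the function name mirrors the test helper, not a public API:

```python
import base64
import zlib

def format_interchange_psk(key: str, hash_id: int) -> str:
    """Sketch of the PSK interchange format seen in this log:
    'NVMeTLSkey-1:<hh>:base64(PSK || CRC32(PSK)):'.
    The little-endian CRC32 placement is an assumption based on the
    NVMe/TCP TLS PSK interchange format; the log above confirms the
    output for this particular key and hash id."""
    data = key.encode()
    # CRC32 of the configured PSK, appended little-endian.
    crc = zlib.crc32(data).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(data + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, b64)
```

For the key in this run, the result matches the `key_long` value recorded above.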
00:23:23.759 3253.00 IOPS, 12.71 MiB/s [2024-11-17T23:28:48.528Z] 3287.50 IOPS, 12.84 MiB/s [2024-11-17T23:28:49.899Z] 3289.33 IOPS, 12.85 MiB/s [2024-11-17T23:28:50.852Z] 3307.75 IOPS, 12.92 MiB/s [2024-11-17T23:28:51.792Z] 3311.60 IOPS, 12.94 MiB/s [2024-11-17T23:28:52.725Z] 3342.67 IOPS, 13.06 MiB/s [2024-11-17T23:28:53.657Z] 3349.00 IOPS, 13.08 MiB/s [2024-11-17T23:28:54.591Z] 3339.50 IOPS, 13.04 MiB/s [2024-11-17T23:28:55.524Z] 3351.56 IOPS, 13.09 MiB/s [2024-11-17T23:28:55.524Z] 3350.80 IOPS, 13.09 MiB/s 00:23:31.702 Latency(us) 00:23:31.702 [2024-11-17T23:28:55.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.702 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:31.702 Verification LBA range: start 0x0 length 0x2000 00:23:31.702 TLSTESTn1 : 10.02 3357.13 13.11 0.00 0.00 38067.07 6699.24 43496.49 00:23:31.702 [2024-11-17T23:28:55.524Z] =================================================================================================================== 00:23:31.702 [2024-11-17T23:28:55.524Z] Total : 3357.13 13.11 0.00 0.00 38067.07 6699.24 43496.49 00:23:31.960 { 00:23:31.960 "results": [ 00:23:31.960 { 00:23:31.960 "job": "TLSTESTn1", 00:23:31.960 "core_mask": "0x4", 00:23:31.960 "workload": "verify", 00:23:31.960 "status": "finished", 00:23:31.960 "verify_range": { 00:23:31.960 "start": 0, 00:23:31.960 "length": 8192 00:23:31.960 }, 00:23:31.960 "queue_depth": 128, 00:23:31.960 "io_size": 4096, 00:23:31.960 "runtime": 10.019258, 00:23:31.960 "iops": 3357.134829744877, 00:23:31.960 "mibps": 13.113807928690926, 00:23:31.960 "io_failed": 0, 00:23:31.960 "io_timeout": 0, 00:23:31.960 "avg_latency_us": 38067.06575512128, 00:23:31.960 "min_latency_us": 6699.235555555556, 00:23:31.960 "max_latency_us": 43496.485925925925 00:23:31.960 } 00:23:31.960 ], 00:23:31.960 "core_count": 1 00:23:31.960 } 00:23:31.960 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:31.960 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 276046 00:23:31.960 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 276046 ']' 00:23:31.960 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 276046 00:23:31.960 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:31.960 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.960 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 276046 00:23:31.960 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:31.960 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:31.960 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 276046' 00:23:31.960 killing process with pid 276046 00:23:31.960 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 276046 00:23:31.960 Received shutdown signal, test time was about 10.000000 seconds 00:23:31.960 00:23:31.960 Latency(us) 00:23:31.960 [2024-11-17T23:28:55.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.960 [2024-11-17T23:28:55.782Z] =================================================================================================================== 00:23:31.960 [2024-11-17T23:28:55.782Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:31.960 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 276046 00:23:31.960 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.f9zssfYYJu 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.f9zssfYYJu 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.f9zssfYYJu 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.f9zssfYYJu 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.f9zssfYYJu 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=277355 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:32.219 00:28:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 277355 /var/tmp/bdevperf.sock 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 277355 ']' 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.219 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.219 [2024-11-18 00:28:55.835618] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:32.219 [2024-11-18 00:28:55.835723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277355 ] 00:23:32.219 [2024-11-18 00:28:55.905011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.219 [2024-11-18 00:28:55.954219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.477 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.477 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:32.477 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.f9zssfYYJu 00:23:32.735 [2024-11-18 00:28:56.330707] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.f9zssfYYJu': 0100666 00:23:32.735 [2024-11-18 00:28:56.330751] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:32.735 request: 00:23:32.735 { 00:23:32.735 "name": "key0", 00:23:32.735 "path": "/tmp/tmp.f9zssfYYJu", 00:23:32.735 "method": "keyring_file_add_key", 00:23:32.735 "req_id": 1 00:23:32.735 } 00:23:32.735 Got JSON-RPC error response 00:23:32.735 response: 00:23:32.735 { 00:23:32.735 "code": -1, 00:23:32.735 "message": "Operation not permitted" 00:23:32.735 } 00:23:32.735 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:32.993 [2024-11-18 00:28:56.595524] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:32.993 [2024-11-18 00:28:56.595599] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:32.993 request: 00:23:32.993 { 00:23:32.993 "name": "TLSTEST", 00:23:32.993 "trtype": "tcp", 00:23:32.993 "traddr": "10.0.0.2", 00:23:32.993 "adrfam": "ipv4", 00:23:32.993 "trsvcid": "4420", 00:23:32.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:32.993 "prchk_reftag": false, 00:23:32.993 "prchk_guard": false, 00:23:32.993 "hdgst": false, 00:23:32.993 "ddgst": false, 00:23:32.993 "psk": "key0", 00:23:32.993 "allow_unrecognized_csi": false, 00:23:32.993 "method": "bdev_nvme_attach_controller", 00:23:32.993 "req_id": 1 00:23:32.993 } 00:23:32.993 Got JSON-RPC error response 00:23:32.993 response: 00:23:32.993 { 00:23:32.993 "code": -126, 00:23:32.993 "message": "Required key not available" 00:23:32.993 } 00:23:32.993 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 277355 00:23:32.993 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 277355 ']' 00:23:32.993 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 277355 00:23:32.993 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:32.993 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.993 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277355 00:23:32.993 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:32.993 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:32.993 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 277355' 00:23:32.993 killing process with pid 277355 00:23:32.993 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 277355 00:23:32.993 Received shutdown signal, test time was about 10.000000 seconds 00:23:32.993 00:23:32.993 Latency(us) 00:23:32.993 [2024-11-17T23:28:56.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.993 [2024-11-17T23:28:56.815Z] =================================================================================================================== 00:23:32.993 [2024-11-17T23:28:56.815Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:32.993 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 277355 00:23:33.260 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:33.260 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:33.260 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:33.260 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:33.260 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:33.260 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 275797 00:23:33.260 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275797 ']' 00:23:33.260 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275797 00:23:33.260 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:33.260 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.260 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275797 00:23:33.260 00:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:33.260 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:33.261 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275797' 00:23:33.261 killing process with pid 275797 00:23:33.261 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275797 00:23:33.261 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275797 00:23:33.524 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:33.524 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:33.524 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.524 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.524 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=277621 00:23:33.524 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:33.524 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 277621 00:23:33.524 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 277621 ']' 00:23:33.524 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.524 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.524 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:33.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.524 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.524 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.524 [2024-11-18 00:28:57.161252] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:33.524 [2024-11-18 00:28:57.161353] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.524 [2024-11-18 00:28:57.237145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.524 [2024-11-18 00:28:57.283720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.524 [2024-11-18 00:28:57.283784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.524 [2024-11-18 00:28:57.283813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.524 [2024-11-18 00:28:57.283824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.524 [2024-11-18 00:28:57.283834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:33.524 [2024-11-18 00:28:57.284478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.782 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.782 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:33.782 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:33.782 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.782 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.782 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.782 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.f9zssfYYJu 00:23:33.782 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:33.782 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.f9zssfYYJu 00:23:33.782 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:33.782 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.782 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:33.782 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.782 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.f9zssfYYJu 00:23:33.782 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.f9zssfYYJu 00:23:33.782 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:34.040 [2024-11-18 00:28:57.731629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.040 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:34.298 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:34.556 [2024-11-18 00:28:58.305222] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:34.556 [2024-11-18 00:28:58.305513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.556 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:34.814 malloc0 00:23:34.814 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:35.073 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.f9zssfYYJu 00:23:35.331 [2024-11-18 00:28:59.110546] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.f9zssfYYJu': 0100666 00:23:35.331 [2024-11-18 00:28:59.110586] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:35.331 request: 00:23:35.331 { 00:23:35.331 "name": "key0", 00:23:35.331 "path": "/tmp/tmp.f9zssfYYJu", 00:23:35.331 "method": "keyring_file_add_key", 00:23:35.331 "req_id": 1 
00:23:35.331 } 00:23:35.331 Got JSON-RPC error response 00:23:35.331 response: 00:23:35.331 { 00:23:35.331 "code": -1, 00:23:35.331 "message": "Operation not permitted" 00:23:35.331 } 00:23:35.331 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:35.589 [2024-11-18 00:28:59.387396] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:35.589 [2024-11-18 00:28:59.387456] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:35.589 request: 00:23:35.589 { 00:23:35.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.589 "host": "nqn.2016-06.io.spdk:host1", 00:23:35.589 "psk": "key0", 00:23:35.589 "method": "nvmf_subsystem_add_host", 00:23:35.589 "req_id": 1 00:23:35.589 } 00:23:35.589 Got JSON-RPC error response 00:23:35.589 response: 00:23:35.589 { 00:23:35.589 "code": -32603, 00:23:35.589 "message": "Internal error" 00:23:35.589 } 00:23:35.589 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:35.589 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:35.589 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:35.589 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:35.589 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 277621 00:23:35.589 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 277621 ']' 00:23:35.589 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 277621 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:35.847 00:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277621 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 277621' 00:23:35.847 killing process with pid 277621 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 277621 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 277621 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.f9zssfYYJu 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=277927 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 277927 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 277927 ']' 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.847 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.105 [2024-11-18 00:28:59.700818] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:36.105 [2024-11-18 00:28:59.700918] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.105 [2024-11-18 00:28:59.771823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.105 [2024-11-18 00:28:59.816295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.105 [2024-11-18 00:28:59.816353] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.105 [2024-11-18 00:28:59.816382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.105 [2024-11-18 00:28:59.816394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.105 [2024-11-18 00:28:59.816403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:36.105 [2024-11-18 00:28:59.816954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.105 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.106 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:36.106 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:36.363 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:36.363 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.363 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.363 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.f9zssfYYJu 00:23:36.363 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.f9zssfYYJu 00:23:36.363 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:36.620 [2024-11-18 00:29:00.195331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.620 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:36.879 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:37.137 [2024-11-18 00:29:00.748814] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.137 [2024-11-18 00:29:00.749054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:37.137 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:37.394 malloc0 00:23:37.394 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:37.652 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.f9zssfYYJu 00:23:37.909 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:38.167 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=278212 00:23:38.168 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:38.168 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:38.168 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 278212 /var/tmp/bdevperf.sock 00:23:38.168 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 278212 ']' 00:23:38.168 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.168 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.168 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:23:38.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.168 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.168 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.168 [2024-11-18 00:29:01.883173] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:38.168 [2024-11-18 00:29:01.883250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278212 ] 00:23:38.168 [2024-11-18 00:29:01.951760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.426 [2024-11-18 00:29:02.004013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.426 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.426 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:38.426 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.f9zssfYYJu 00:23:38.683 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:38.942 [2024-11-18 00:29:02.656712] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.942 TLSTESTn1 00:23:38.942 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:39.510 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:39.510 "subsystems": [ 00:23:39.510 { 00:23:39.510 "subsystem": "keyring", 00:23:39.510 "config": [ 00:23:39.510 { 00:23:39.510 "method": "keyring_file_add_key", 00:23:39.511 "params": { 00:23:39.511 "name": "key0", 00:23:39.511 "path": "/tmp/tmp.f9zssfYYJu" 00:23:39.511 } 00:23:39.511 } 00:23:39.511 ] 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "subsystem": "iobuf", 00:23:39.511 "config": [ 00:23:39.511 { 00:23:39.511 "method": "iobuf_set_options", 00:23:39.511 "params": { 00:23:39.511 "small_pool_count": 8192, 00:23:39.511 "large_pool_count": 1024, 00:23:39.511 "small_bufsize": 8192, 00:23:39.511 "large_bufsize": 135168, 00:23:39.511 "enable_numa": false 00:23:39.511 } 00:23:39.511 } 00:23:39.511 ] 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "subsystem": "sock", 00:23:39.511 "config": [ 00:23:39.511 { 00:23:39.511 "method": "sock_set_default_impl", 00:23:39.511 "params": { 00:23:39.511 "impl_name": "posix" 00:23:39.511 } 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "method": "sock_impl_set_options", 00:23:39.511 "params": { 00:23:39.511 "impl_name": "ssl", 00:23:39.511 "recv_buf_size": 4096, 00:23:39.511 "send_buf_size": 4096, 00:23:39.511 "enable_recv_pipe": true, 00:23:39.511 "enable_quickack": false, 00:23:39.511 "enable_placement_id": 0, 00:23:39.511 "enable_zerocopy_send_server": true, 00:23:39.511 "enable_zerocopy_send_client": false, 00:23:39.511 "zerocopy_threshold": 0, 00:23:39.511 "tls_version": 0, 00:23:39.511 "enable_ktls": false 00:23:39.511 } 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "method": "sock_impl_set_options", 00:23:39.511 "params": { 00:23:39.511 "impl_name": "posix", 00:23:39.511 "recv_buf_size": 2097152, 00:23:39.511 "send_buf_size": 2097152, 00:23:39.511 "enable_recv_pipe": true, 00:23:39.511 "enable_quickack": false, 00:23:39.511 "enable_placement_id": 0, 
00:23:39.511 "enable_zerocopy_send_server": true, 00:23:39.511 "enable_zerocopy_send_client": false, 00:23:39.511 "zerocopy_threshold": 0, 00:23:39.511 "tls_version": 0, 00:23:39.511 "enable_ktls": false 00:23:39.511 } 00:23:39.511 } 00:23:39.511 ] 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "subsystem": "vmd", 00:23:39.511 "config": [] 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "subsystem": "accel", 00:23:39.511 "config": [ 00:23:39.511 { 00:23:39.511 "method": "accel_set_options", 00:23:39.511 "params": { 00:23:39.511 "small_cache_size": 128, 00:23:39.511 "large_cache_size": 16, 00:23:39.511 "task_count": 2048, 00:23:39.511 "sequence_count": 2048, 00:23:39.511 "buf_count": 2048 00:23:39.511 } 00:23:39.511 } 00:23:39.511 ] 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "subsystem": "bdev", 00:23:39.511 "config": [ 00:23:39.511 { 00:23:39.511 "method": "bdev_set_options", 00:23:39.511 "params": { 00:23:39.511 "bdev_io_pool_size": 65535, 00:23:39.511 "bdev_io_cache_size": 256, 00:23:39.511 "bdev_auto_examine": true, 00:23:39.511 "iobuf_small_cache_size": 128, 00:23:39.511 "iobuf_large_cache_size": 16 00:23:39.511 } 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "method": "bdev_raid_set_options", 00:23:39.511 "params": { 00:23:39.511 "process_window_size_kb": 1024, 00:23:39.511 "process_max_bandwidth_mb_sec": 0 00:23:39.511 } 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "method": "bdev_iscsi_set_options", 00:23:39.511 "params": { 00:23:39.511 "timeout_sec": 30 00:23:39.511 } 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "method": "bdev_nvme_set_options", 00:23:39.511 "params": { 00:23:39.511 "action_on_timeout": "none", 00:23:39.511 "timeout_us": 0, 00:23:39.511 "timeout_admin_us": 0, 00:23:39.511 "keep_alive_timeout_ms": 10000, 00:23:39.511 "arbitration_burst": 0, 00:23:39.511 "low_priority_weight": 0, 00:23:39.511 "medium_priority_weight": 0, 00:23:39.511 "high_priority_weight": 0, 00:23:39.511 "nvme_adminq_poll_period_us": 10000, 00:23:39.511 "nvme_ioq_poll_period_us": 0, 
00:23:39.511 "io_queue_requests": 0, 00:23:39.511 "delay_cmd_submit": true, 00:23:39.511 "transport_retry_count": 4, 00:23:39.511 "bdev_retry_count": 3, 00:23:39.511 "transport_ack_timeout": 0, 00:23:39.511 "ctrlr_loss_timeout_sec": 0, 00:23:39.511 "reconnect_delay_sec": 0, 00:23:39.511 "fast_io_fail_timeout_sec": 0, 00:23:39.511 "disable_auto_failback": false, 00:23:39.511 "generate_uuids": false, 00:23:39.511 "transport_tos": 0, 00:23:39.511 "nvme_error_stat": false, 00:23:39.511 "rdma_srq_size": 0, 00:23:39.511 "io_path_stat": false, 00:23:39.511 "allow_accel_sequence": false, 00:23:39.511 "rdma_max_cq_size": 0, 00:23:39.511 "rdma_cm_event_timeout_ms": 0, 00:23:39.511 "dhchap_digests": [ 00:23:39.511 "sha256", 00:23:39.511 "sha384", 00:23:39.511 "sha512" 00:23:39.511 ], 00:23:39.511 "dhchap_dhgroups": [ 00:23:39.511 "null", 00:23:39.511 "ffdhe2048", 00:23:39.511 "ffdhe3072", 00:23:39.511 "ffdhe4096", 00:23:39.511 "ffdhe6144", 00:23:39.511 "ffdhe8192" 00:23:39.511 ] 00:23:39.511 } 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "method": "bdev_nvme_set_hotplug", 00:23:39.511 "params": { 00:23:39.511 "period_us": 100000, 00:23:39.511 "enable": false 00:23:39.511 } 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "method": "bdev_malloc_create", 00:23:39.511 "params": { 00:23:39.511 "name": "malloc0", 00:23:39.511 "num_blocks": 8192, 00:23:39.511 "block_size": 4096, 00:23:39.511 "physical_block_size": 4096, 00:23:39.511 "uuid": "6f99b693-143c-438b-9437-b492701601fe", 00:23:39.511 "optimal_io_boundary": 0, 00:23:39.511 "md_size": 0, 00:23:39.511 "dif_type": 0, 00:23:39.511 "dif_is_head_of_md": false, 00:23:39.511 "dif_pi_format": 0 00:23:39.511 } 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "method": "bdev_wait_for_examine" 00:23:39.511 } 00:23:39.511 ] 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "subsystem": "nbd", 00:23:39.511 "config": [] 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "subsystem": "scheduler", 00:23:39.511 "config": [ 00:23:39.511 { 00:23:39.511 "method": 
"framework_set_scheduler", 00:23:39.511 "params": { 00:23:39.511 "name": "static" 00:23:39.511 } 00:23:39.511 } 00:23:39.511 ] 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "subsystem": "nvmf", 00:23:39.511 "config": [ 00:23:39.511 { 00:23:39.511 "method": "nvmf_set_config", 00:23:39.511 "params": { 00:23:39.511 "discovery_filter": "match_any", 00:23:39.511 "admin_cmd_passthru": { 00:23:39.511 "identify_ctrlr": false 00:23:39.511 }, 00:23:39.511 "dhchap_digests": [ 00:23:39.511 "sha256", 00:23:39.511 "sha384", 00:23:39.511 "sha512" 00:23:39.511 ], 00:23:39.511 "dhchap_dhgroups": [ 00:23:39.511 "null", 00:23:39.511 "ffdhe2048", 00:23:39.511 "ffdhe3072", 00:23:39.511 "ffdhe4096", 00:23:39.511 "ffdhe6144", 00:23:39.511 "ffdhe8192" 00:23:39.511 ] 00:23:39.511 } 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "method": "nvmf_set_max_subsystems", 00:23:39.511 "params": { 00:23:39.511 "max_subsystems": 1024 00:23:39.511 } 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "method": "nvmf_set_crdt", 00:23:39.511 "params": { 00:23:39.511 "crdt1": 0, 00:23:39.511 "crdt2": 0, 00:23:39.511 "crdt3": 0 00:23:39.511 } 00:23:39.511 }, 00:23:39.511 { 00:23:39.511 "method": "nvmf_create_transport", 00:23:39.511 "params": { 00:23:39.511 "trtype": "TCP", 00:23:39.511 "max_queue_depth": 128, 00:23:39.511 "max_io_qpairs_per_ctrlr": 127, 00:23:39.511 "in_capsule_data_size": 4096, 00:23:39.512 "max_io_size": 131072, 00:23:39.512 "io_unit_size": 131072, 00:23:39.512 "max_aq_depth": 128, 00:23:39.512 "num_shared_buffers": 511, 00:23:39.512 "buf_cache_size": 4294967295, 00:23:39.512 "dif_insert_or_strip": false, 00:23:39.512 "zcopy": false, 00:23:39.512 "c2h_success": false, 00:23:39.512 "sock_priority": 0, 00:23:39.512 "abort_timeout_sec": 1, 00:23:39.512 "ack_timeout": 0, 00:23:39.512 "data_wr_pool_size": 0 00:23:39.512 } 00:23:39.512 }, 00:23:39.512 { 00:23:39.512 "method": "nvmf_create_subsystem", 00:23:39.512 "params": { 00:23:39.512 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.512 
"allow_any_host": false, 00:23:39.512 "serial_number": "SPDK00000000000001", 00:23:39.512 "model_number": "SPDK bdev Controller", 00:23:39.512 "max_namespaces": 10, 00:23:39.512 "min_cntlid": 1, 00:23:39.512 "max_cntlid": 65519, 00:23:39.512 "ana_reporting": false 00:23:39.512 } 00:23:39.512 }, 00:23:39.512 { 00:23:39.512 "method": "nvmf_subsystem_add_host", 00:23:39.512 "params": { 00:23:39.512 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.512 "host": "nqn.2016-06.io.spdk:host1", 00:23:39.512 "psk": "key0" 00:23:39.512 } 00:23:39.512 }, 00:23:39.512 { 00:23:39.512 "method": "nvmf_subsystem_add_ns", 00:23:39.512 "params": { 00:23:39.512 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.512 "namespace": { 00:23:39.512 "nsid": 1, 00:23:39.512 "bdev_name": "malloc0", 00:23:39.512 "nguid": "6F99B693143C438B9437B492701601FE", 00:23:39.512 "uuid": "6f99b693-143c-438b-9437-b492701601fe", 00:23:39.512 "no_auto_visible": false 00:23:39.512 } 00:23:39.512 } 00:23:39.512 }, 00:23:39.512 { 00:23:39.512 "method": "nvmf_subsystem_add_listener", 00:23:39.512 "params": { 00:23:39.512 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.512 "listen_address": { 00:23:39.512 "trtype": "TCP", 00:23:39.512 "adrfam": "IPv4", 00:23:39.512 "traddr": "10.0.0.2", 00:23:39.512 "trsvcid": "4420" 00:23:39.512 }, 00:23:39.512 "secure_channel": true 00:23:39.512 } 00:23:39.512 } 00:23:39.512 ] 00:23:39.512 } 00:23:39.512 ] 00:23:39.512 }' 00:23:39.512 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:39.780 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:39.780 "subsystems": [ 00:23:39.780 { 00:23:39.780 "subsystem": "keyring", 00:23:39.780 "config": [ 00:23:39.780 { 00:23:39.780 "method": "keyring_file_add_key", 00:23:39.780 "params": { 00:23:39.780 "name": "key0", 00:23:39.780 "path": "/tmp/tmp.f9zssfYYJu" 00:23:39.780 } 
00:23:39.780 } 00:23:39.780 ] 00:23:39.780 }, 00:23:39.780 { 00:23:39.780 "subsystem": "iobuf", 00:23:39.780 "config": [ 00:23:39.780 { 00:23:39.780 "method": "iobuf_set_options", 00:23:39.780 "params": { 00:23:39.780 "small_pool_count": 8192, 00:23:39.780 "large_pool_count": 1024, 00:23:39.780 "small_bufsize": 8192, 00:23:39.780 "large_bufsize": 135168, 00:23:39.780 "enable_numa": false 00:23:39.780 } 00:23:39.780 } 00:23:39.780 ] 00:23:39.780 }, 00:23:39.780 { 00:23:39.780 "subsystem": "sock", 00:23:39.780 "config": [ 00:23:39.780 { 00:23:39.780 "method": "sock_set_default_impl", 00:23:39.781 "params": { 00:23:39.781 "impl_name": "posix" 00:23:39.781 } 00:23:39.781 }, 00:23:39.781 { 00:23:39.781 "method": "sock_impl_set_options", 00:23:39.781 "params": { 00:23:39.781 "impl_name": "ssl", 00:23:39.781 "recv_buf_size": 4096, 00:23:39.781 "send_buf_size": 4096, 00:23:39.781 "enable_recv_pipe": true, 00:23:39.781 "enable_quickack": false, 00:23:39.781 "enable_placement_id": 0, 00:23:39.781 "enable_zerocopy_send_server": true, 00:23:39.781 "enable_zerocopy_send_client": false, 00:23:39.781 "zerocopy_threshold": 0, 00:23:39.781 "tls_version": 0, 00:23:39.781 "enable_ktls": false 00:23:39.781 } 00:23:39.781 }, 00:23:39.781 { 00:23:39.781 "method": "sock_impl_set_options", 00:23:39.781 "params": { 00:23:39.781 "impl_name": "posix", 00:23:39.781 "recv_buf_size": 2097152, 00:23:39.781 "send_buf_size": 2097152, 00:23:39.781 "enable_recv_pipe": true, 00:23:39.781 "enable_quickack": false, 00:23:39.781 "enable_placement_id": 0, 00:23:39.781 "enable_zerocopy_send_server": true, 00:23:39.781 "enable_zerocopy_send_client": false, 00:23:39.781 "zerocopy_threshold": 0, 00:23:39.781 "tls_version": 0, 00:23:39.781 "enable_ktls": false 00:23:39.781 } 00:23:39.781 } 00:23:39.781 ] 00:23:39.781 }, 00:23:39.781 { 00:23:39.781 "subsystem": "vmd", 00:23:39.781 "config": [] 00:23:39.781 }, 00:23:39.781 { 00:23:39.781 "subsystem": "accel", 00:23:39.781 "config": [ 00:23:39.781 { 00:23:39.781 
"method": "accel_set_options", 00:23:39.781 "params": { 00:23:39.781 "small_cache_size": 128, 00:23:39.781 "large_cache_size": 16, 00:23:39.782 "task_count": 2048, 00:23:39.782 "sequence_count": 2048, 00:23:39.782 "buf_count": 2048 00:23:39.782 } 00:23:39.782 } 00:23:39.782 ] 00:23:39.782 }, 00:23:39.782 { 00:23:39.782 "subsystem": "bdev", 00:23:39.782 "config": [ 00:23:39.782 { 00:23:39.782 "method": "bdev_set_options", 00:23:39.782 "params": { 00:23:39.782 "bdev_io_pool_size": 65535, 00:23:39.782 "bdev_io_cache_size": 256, 00:23:39.782 "bdev_auto_examine": true, 00:23:39.782 "iobuf_small_cache_size": 128, 00:23:39.782 "iobuf_large_cache_size": 16 00:23:39.782 } 00:23:39.782 }, 00:23:39.782 { 00:23:39.782 "method": "bdev_raid_set_options", 00:23:39.782 "params": { 00:23:39.782 "process_window_size_kb": 1024, 00:23:39.782 "process_max_bandwidth_mb_sec": 0 00:23:39.782 } 00:23:39.782 }, 00:23:39.782 { 00:23:39.782 "method": "bdev_iscsi_set_options", 00:23:39.782 "params": { 00:23:39.782 "timeout_sec": 30 00:23:39.782 } 00:23:39.782 }, 00:23:39.782 { 00:23:39.782 "method": "bdev_nvme_set_options", 00:23:39.782 "params": { 00:23:39.782 "action_on_timeout": "none", 00:23:39.782 "timeout_us": 0, 00:23:39.782 "timeout_admin_us": 0, 00:23:39.782 "keep_alive_timeout_ms": 10000, 00:23:39.783 "arbitration_burst": 0, 00:23:39.783 "low_priority_weight": 0, 00:23:39.783 "medium_priority_weight": 0, 00:23:39.783 "high_priority_weight": 0, 00:23:39.783 "nvme_adminq_poll_period_us": 10000, 00:23:39.783 "nvme_ioq_poll_period_us": 0, 00:23:39.783 "io_queue_requests": 512, 00:23:39.783 "delay_cmd_submit": true, 00:23:39.783 "transport_retry_count": 4, 00:23:39.783 "bdev_retry_count": 3, 00:23:39.783 "transport_ack_timeout": 0, 00:23:39.783 "ctrlr_loss_timeout_sec": 0, 00:23:39.783 "reconnect_delay_sec": 0, 00:23:39.783 "fast_io_fail_timeout_sec": 0, 00:23:39.783 "disable_auto_failback": false, 00:23:39.783 "generate_uuids": false, 00:23:39.783 "transport_tos": 0, 00:23:39.783 
"nvme_error_stat": false, 00:23:39.783 "rdma_srq_size": 0, 00:23:39.783 "io_path_stat": false, 00:23:39.783 "allow_accel_sequence": false, 00:23:39.783 "rdma_max_cq_size": 0, 00:23:39.783 "rdma_cm_event_timeout_ms": 0, 00:23:39.783 "dhchap_digests": [ 00:23:39.783 "sha256", 00:23:39.783 "sha384", 00:23:39.783 "sha512" 00:23:39.783 ], 00:23:39.783 "dhchap_dhgroups": [ 00:23:39.783 "null", 00:23:39.801 "ffdhe2048", 00:23:39.801 "ffdhe3072", 00:23:39.801 "ffdhe4096", 00:23:39.801 "ffdhe6144", 00:23:39.801 "ffdhe8192" 00:23:39.801 ] 00:23:39.801 } 00:23:39.801 }, 00:23:39.801 { 00:23:39.801 "method": "bdev_nvme_attach_controller", 00:23:39.801 "params": { 00:23:39.801 "name": "TLSTEST", 00:23:39.801 "trtype": "TCP", 00:23:39.801 "adrfam": "IPv4", 00:23:39.801 "traddr": "10.0.0.2", 00:23:39.801 "trsvcid": "4420", 00:23:39.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.801 "prchk_reftag": false, 00:23:39.801 "prchk_guard": false, 00:23:39.801 "ctrlr_loss_timeout_sec": 0, 00:23:39.801 "reconnect_delay_sec": 0, 00:23:39.801 "fast_io_fail_timeout_sec": 0, 00:23:39.801 "psk": "key0", 00:23:39.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:39.801 "hdgst": false, 00:23:39.801 "ddgst": false, 00:23:39.801 "multipath": "multipath" 00:23:39.801 } 00:23:39.801 }, 00:23:39.801 { 00:23:39.801 "method": "bdev_nvme_set_hotplug", 00:23:39.801 "params": { 00:23:39.801 "period_us": 100000, 00:23:39.801 "enable": false 00:23:39.801 } 00:23:39.801 }, 00:23:39.801 { 00:23:39.801 "method": "bdev_wait_for_examine" 00:23:39.801 } 00:23:39.801 ] 00:23:39.801 }, 00:23:39.801 { 00:23:39.801 "subsystem": "nbd", 00:23:39.801 "config": [] 00:23:39.801 } 00:23:39.801 ] 00:23:39.801 }' 00:23:39.801 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 278212 00:23:39.801 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 278212 ']' 00:23:39.801 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 278212 00:23:39.801 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:39.801 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.801 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278212 00:23:39.801 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:39.801 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:39.801 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278212' 00:23:39.801 killing process with pid 278212 00:23:39.801 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 278212 00:23:39.801 Received shutdown signal, test time was about 10.000000 seconds 00:23:39.801 00:23:39.801 Latency(us) 00:23:39.801 [2024-11-17T23:29:03.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.801 [2024-11-17T23:29:03.623Z] =================================================================================================================== 00:23:39.801 [2024-11-17T23:29:03.623Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:39.801 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 278212 00:23:40.061 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 277927 00:23:40.061 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 277927 ']' 00:23:40.061 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 277927 00:23:40.061 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:40.061 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.061 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277927 00:23:40.061 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:40.061 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:40.061 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 277927' 00:23:40.061 killing process with pid 277927 00:23:40.061 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 277927 00:23:40.061 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 277927 00:23:40.321 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:40.321 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:40.321 "subsystems": [ 00:23:40.321 { 00:23:40.321 "subsystem": "keyring", 00:23:40.321 "config": [ 00:23:40.321 { 00:23:40.321 "method": "keyring_file_add_key", 00:23:40.321 "params": { 00:23:40.321 "name": "key0", 00:23:40.321 "path": "/tmp/tmp.f9zssfYYJu" 00:23:40.321 } 00:23:40.321 } 00:23:40.321 ] 00:23:40.321 }, 00:23:40.321 { 00:23:40.321 "subsystem": "iobuf", 00:23:40.321 "config": [ 00:23:40.321 { 00:23:40.321 "method": "iobuf_set_options", 00:23:40.321 "params": { 00:23:40.321 "small_pool_count": 8192, 00:23:40.321 "large_pool_count": 1024, 00:23:40.321 "small_bufsize": 8192, 00:23:40.321 "large_bufsize": 135168, 00:23:40.321 "enable_numa": false 00:23:40.321 } 00:23:40.321 } 00:23:40.321 ] 00:23:40.321 }, 00:23:40.321 { 00:23:40.321 "subsystem": "sock", 00:23:40.321 "config": [ 00:23:40.321 { 00:23:40.321 "method": "sock_set_default_impl", 00:23:40.321 "params": { 00:23:40.321 "impl_name": "posix" 00:23:40.321 } 00:23:40.321 }, 
00:23:40.321 { 00:23:40.321 "method": "sock_impl_set_options", 00:23:40.321 "params": { 00:23:40.321 "impl_name": "ssl", 00:23:40.321 "recv_buf_size": 4096, 00:23:40.321 "send_buf_size": 4096, 00:23:40.321 "enable_recv_pipe": true, 00:23:40.321 "enable_quickack": false, 00:23:40.321 "enable_placement_id": 0, 00:23:40.321 "enable_zerocopy_send_server": true, 00:23:40.321 "enable_zerocopy_send_client": false, 00:23:40.321 "zerocopy_threshold": 0, 00:23:40.321 "tls_version": 0, 00:23:40.321 "enable_ktls": false 00:23:40.321 } 00:23:40.321 }, 00:23:40.321 { 00:23:40.321 "method": "sock_impl_set_options", 00:23:40.321 "params": { 00:23:40.321 "impl_name": "posix", 00:23:40.321 "recv_buf_size": 2097152, 00:23:40.321 "send_buf_size": 2097152, 00:23:40.321 "enable_recv_pipe": true, 00:23:40.321 "enable_quickack": false, 00:23:40.321 "enable_placement_id": 0, 00:23:40.321 "enable_zerocopy_send_server": true, 00:23:40.321 "enable_zerocopy_send_client": false, 00:23:40.321 "zerocopy_threshold": 0, 00:23:40.321 "tls_version": 0, 00:23:40.321 "enable_ktls": false 00:23:40.321 } 00:23:40.321 } 00:23:40.321 ] 00:23:40.321 }, 00:23:40.321 { 00:23:40.321 "subsystem": "vmd", 00:23:40.321 "config": [] 00:23:40.321 }, 00:23:40.321 { 00:23:40.321 "subsystem": "accel", 00:23:40.321 "config": [ 00:23:40.321 { 00:23:40.321 "method": "accel_set_options", 00:23:40.321 "params": { 00:23:40.321 "small_cache_size": 128, 00:23:40.321 "large_cache_size": 16, 00:23:40.321 "task_count": 2048, 00:23:40.321 "sequence_count": 2048, 00:23:40.321 "buf_count": 2048 00:23:40.321 } 00:23:40.321 } 00:23:40.321 ] 00:23:40.321 }, 00:23:40.321 { 00:23:40.321 "subsystem": "bdev", 00:23:40.321 "config": [ 00:23:40.321 { 00:23:40.321 "method": "bdev_set_options", 00:23:40.321 "params": { 00:23:40.321 "bdev_io_pool_size": 65535, 00:23:40.321 "bdev_io_cache_size": 256, 00:23:40.321 "bdev_auto_examine": true, 00:23:40.321 "iobuf_small_cache_size": 128, 00:23:40.321 "iobuf_large_cache_size": 16 00:23:40.321 } 
00:23:40.321 }, 00:23:40.321 { 00:23:40.321 "method": "bdev_raid_set_options", 00:23:40.321 "params": { 00:23:40.321 "process_window_size_kb": 1024, 00:23:40.321 "process_max_bandwidth_mb_sec": 0 00:23:40.321 } 00:23:40.321 }, 00:23:40.321 { 00:23:40.321 "method": "bdev_iscsi_set_options", 00:23:40.321 "params": { 00:23:40.321 "timeout_sec": 30 00:23:40.321 } 00:23:40.321 }, 00:23:40.321 { 00:23:40.321 "method": "bdev_nvme_set_options", 00:23:40.321 "params": { 00:23:40.321 "action_on_timeout": "none", 00:23:40.321 "timeout_us": 0, 00:23:40.321 "timeout_admin_us": 0, 00:23:40.321 "keep_alive_timeout_ms": 10000, 00:23:40.321 "arbitration_burst": 0, 00:23:40.321 "low_priority_weight": 0, 00:23:40.321 "medium_priority_weight": 0, 00:23:40.321 "high_priority_weight": 0, 00:23:40.321 "nvme_adminq_poll_period_us": 10000, 00:23:40.321 "nvme_ioq_poll_period_us": 0, 00:23:40.321 "io_queue_requests": 0, 00:23:40.321 "delay_cmd_submit": true, 00:23:40.321 "transport_retry_count": 4, 00:23:40.321 "bdev_retry_count": 3, 00:23:40.321 "transport_ack_timeout": 0, 00:23:40.321 "ctrlr_loss_timeout_sec": 0, 00:23:40.321 "reconnect_delay_sec": 0, 00:23:40.321 "fast_io_fail_timeout_sec": 0, 00:23:40.321 "disable_auto_failback": false, 00:23:40.321 "generate_uuids": false, 00:23:40.321 "transport_tos": 0, 00:23:40.321 "nvme_error_stat": false, 00:23:40.321 "rdma_srq_size": 0, 00:23:40.321 "io_path_stat": false, 00:23:40.321 "allow_accel_sequence": false, 00:23:40.321 "rdma_max_cq_size": 0, 00:23:40.321 "rdma_cm_event_timeout_ms": 0, 00:23:40.321 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:40.321 "dhchap_digests": [ 00:23:40.321 "sha256", 00:23:40.321 "sha384", 00:23:40.321 "sha512" 00:23:40.321 ], 00:23:40.321 "dhchap_dhgroups": [ 00:23:40.321 "null", 00:23:40.321 "ffdhe2048", 00:23:40.321 "ffdhe3072", 00:23:40.321 "ffdhe4096", 00:23:40.321 "ffdhe6144", 00:23:40.321 "ffdhe8192" 00:23:40.321 ] 00:23:40.321 } 00:23:40.321 }, 
00:23:40.321 { 00:23:40.321 "method": "bdev_nvme_set_hotplug", 00:23:40.321 "params": { 00:23:40.321 "period_us": 100000, 00:23:40.321 "enable": false 00:23:40.321 } 00:23:40.321 }, 00:23:40.321 { 00:23:40.321 "method": "bdev_malloc_create", 00:23:40.321 "params": { 00:23:40.321 "name": "malloc0", 00:23:40.321 "num_blocks": 8192, 00:23:40.322 "block_size": 4096, 00:23:40.322 "physical_block_size": 4096, 00:23:40.322 "uuid": "6f99b693-143c-438b-9437-b492701601fe", 00:23:40.322 "optimal_io_boundary": 0, 00:23:40.322 "md_size": 0, 00:23:40.322 "dif_type": 0, 00:23:40.322 "dif_is_head_of_md": false, 00:23:40.322 "dif_pi_format": 0 00:23:40.322 } 00:23:40.322 }, 00:23:40.322 { 00:23:40.322 "method": "bdev_wait_for_examine" 00:23:40.322 } 00:23:40.322 ] 00:23:40.322 }, 00:23:40.322 { 00:23:40.322 "subsystem": "nbd", 00:23:40.322 "config": [] 00:23:40.322 }, 00:23:40.322 { 00:23:40.322 "subsystem": "scheduler", 00:23:40.322 "config": [ 00:23:40.322 { 00:23:40.322 "method": "framework_set_scheduler", 00:23:40.322 "params": { 00:23:40.322 "name": "static" 00:23:40.322 } 00:23:40.322 } 00:23:40.322 ] 00:23:40.322 }, 00:23:40.322 { 00:23:40.322 "subsystem": "nvmf", 00:23:40.322 "config": [ 00:23:40.322 { 00:23:40.322 "method": "nvmf_set_config", 00:23:40.322 "params": { 00:23:40.322 "discovery_filter": "match_any", 00:23:40.322 "admin_cmd_passthru": { 00:23:40.322 "identify_ctrlr": false 00:23:40.322 }, 00:23:40.322 "dhchap_digests": [ 00:23:40.322 "sha256", 00:23:40.322 "sha384", 00:23:40.322 "sha512" 00:23:40.322 ], 00:23:40.322 "dhchap_dhgroups": [ 00:23:40.322 "null", 00:23:40.322 "ffdhe2048", 00:23:40.322 "ffdhe3072", 00:23:40.322 "ffdhe4096", 00:23:40.322 "ffdhe6144", 00:23:40.322 "ffdhe8192" 00:23:40.322 ] 00:23:40.322 } 00:23:40.322 }, 00:23:40.322 { 00:23:40.322 "method": "nvmf_set_max_subsystems", 00:23:40.322 "params": { 00:23:40.322 "max_subsystems": 1024 00:23:40.322 } 00:23:40.322 }, 00:23:40.322 { 00:23:40.322 "method": "nvmf_set_crdt", 00:23:40.322 "params": { 
00:23:40.322 "crdt1": 0, 00:23:40.322 "crdt2": 0, 00:23:40.322 "crdt3": 0 00:23:40.322 } 00:23:40.322 }, 00:23:40.322 { 00:23:40.322 "method": "nvmf_create_transport", 00:23:40.322 "params": { 00:23:40.322 "trtype": "TCP", 00:23:40.322 "max_queue_depth": 128, 00:23:40.322 "max_io_qpairs_per_ctrlr": 127, 00:23:40.322 "in_capsule_data_size": 4096, 00:23:40.322 "max_io_size": 131072, 00:23:40.322 "io_unit_size": 131072, 00:23:40.322 "max_aq_depth": 128, 00:23:40.322 "num_shared_buffers": 511, 00:23:40.322 "buf_cache_size": 4294967295, 00:23:40.322 "dif_insert_or_strip": false, 00:23:40.322 "zcopy": false, 00:23:40.322 "c2h_success": false, 00:23:40.322 "sock_priority": 0, 00:23:40.322 "abort_timeout_sec": 1, 00:23:40.322 "ack_timeout": 0, 00:23:40.322 "data_wr_pool_size": 0 00:23:40.322 } 00:23:40.322 }, 00:23:40.322 { 00:23:40.322 "method": "nvmf_create_subsystem", 00:23:40.322 "params": { 00:23:40.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.322 "allow_any_host": false, 00:23:40.322 "serial_number": "SPDK00000000000001", 00:23:40.322 "model_number": "SPDK bdev Controller", 00:23:40.322 "max_namespaces": 10, 00:23:40.322 "min_cntlid": 1, 00:23:40.322 "max_cntlid": 65519, 00:23:40.322 "ana_reporting": false 00:23:40.322 } 00:23:40.322 }, 00:23:40.322 { 00:23:40.322 "method": "nvmf_subsystem_add_host", 00:23:40.322 "params": { 00:23:40.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.322 "host": "nqn.2016-06.io.spdk:host1", 00:23:40.322 "psk": "key0" 00:23:40.322 } 00:23:40.322 }, 00:23:40.322 { 00:23:40.322 "method": "nvmf_subsystem_add_ns", 00:23:40.322 "params": { 00:23:40.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.322 "namespace": { 00:23:40.322 "nsid": 1, 00:23:40.322 "bdev_name": "malloc0", 00:23:40.322 "nguid": "6F99B693143C438B9437B492701601FE", 00:23:40.322 "uuid": "6f99b693-143c-438b-9437-b492701601fe", 00:23:40.322 "no_auto_visible": false 00:23:40.322 } 00:23:40.322 } 00:23:40.322 }, 00:23:40.322 { 00:23:40.322 "method": 
"nvmf_subsystem_add_listener", 00:23:40.322 "params": { 00:23:40.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.322 "listen_address": { 00:23:40.322 "trtype": "TCP", 00:23:40.322 "adrfam": "IPv4", 00:23:40.322 "traddr": "10.0.0.2", 00:23:40.322 "trsvcid": "4420" 00:23:40.322 }, 00:23:40.322 "secure_channel": true 00:23:40.322 } 00:23:40.322 } 00:23:40.322 ] 00:23:40.322 } 00:23:40.322 ] 00:23:40.322 }' 00:23:40.322 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.322 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.322 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=278485 00:23:40.322 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:40.322 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 278485 00:23:40.322 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 278485 ']' 00:23:40.322 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.322 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.322 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:40.322 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.322 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.322 [2024-11-18 00:29:03.953665] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:40.322 [2024-11-18 00:29:03.953776] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.322 [2024-11-18 00:29:04.026921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.322 [2024-11-18 00:29:04.074220] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.322 [2024-11-18 00:29:04.074287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.322 [2024-11-18 00:29:04.074300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.322 [2024-11-18 00:29:04.074317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.322 [2024-11-18 00:29:04.074343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:40.322 [2024-11-18 00:29:04.074989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.581 [2024-11-18 00:29:04.313008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.581 [2024-11-18 00:29:04.345037] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.581 [2024-11-18 00:29:04.345283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.147 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.147 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:41.147 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.147 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.147 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.405 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.405 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=278640 00:23:41.405 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 278640 /var/tmp/bdevperf.sock 00:23:41.405 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 278640 ']' 00:23:41.405 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.405 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:41.405 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:41.405 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:41.405 "subsystems": [ 00:23:41.405 { 00:23:41.405 "subsystem": "keyring", 00:23:41.405 "config": [ 00:23:41.405 { 00:23:41.405 "method": "keyring_file_add_key", 00:23:41.405 "params": { 00:23:41.406 "name": "key0", 00:23:41.406 "path": "/tmp/tmp.f9zssfYYJu" 00:23:41.406 } 00:23:41.406 } 00:23:41.406 ] 00:23:41.406 }, 00:23:41.406 { 00:23:41.406 "subsystem": "iobuf", 00:23:41.406 "config": [ 00:23:41.406 { 00:23:41.406 "method": "iobuf_set_options", 00:23:41.406 "params": { 00:23:41.406 "small_pool_count": 8192, 00:23:41.406 "large_pool_count": 1024, 00:23:41.406 "small_bufsize": 8192, 00:23:41.406 "large_bufsize": 135168, 00:23:41.406 "enable_numa": false 00:23:41.406 } 00:23:41.406 } 00:23:41.406 ] 00:23:41.406 }, 00:23:41.406 { 00:23:41.406 "subsystem": "sock", 00:23:41.406 "config": [ 00:23:41.406 { 00:23:41.406 "method": "sock_set_default_impl", 00:23:41.406 "params": { 00:23:41.406 "impl_name": "posix" 00:23:41.406 } 00:23:41.406 }, 00:23:41.406 { 00:23:41.406 "method": "sock_impl_set_options", 00:23:41.406 "params": { 00:23:41.406 "impl_name": "ssl", 00:23:41.406 "recv_buf_size": 4096, 00:23:41.406 "send_buf_size": 4096, 00:23:41.406 "enable_recv_pipe": true, 00:23:41.406 "enable_quickack": false, 00:23:41.406 "enable_placement_id": 0, 00:23:41.406 "enable_zerocopy_send_server": true, 00:23:41.406 "enable_zerocopy_send_client": false, 00:23:41.406 "zerocopy_threshold": 0, 00:23:41.406 "tls_version": 0, 00:23:41.406 "enable_ktls": false 00:23:41.406 } 00:23:41.406 }, 00:23:41.406 { 00:23:41.406 "method": "sock_impl_set_options", 00:23:41.406 "params": { 00:23:41.406 "impl_name": "posix", 00:23:41.406 "recv_buf_size": 2097152, 00:23:41.406 "send_buf_size": 2097152, 00:23:41.406 "enable_recv_pipe": true, 00:23:41.406 "enable_quickack": false, 00:23:41.406 "enable_placement_id": 0, 00:23:41.406 "enable_zerocopy_send_server": true, 00:23:41.406 
"enable_zerocopy_send_client": false, 00:23:41.406 "zerocopy_threshold": 0, 00:23:41.406 "tls_version": 0, 00:23:41.406 "enable_ktls": false 00:23:41.406 } 00:23:41.406 } 00:23:41.406 ] 00:23:41.406 }, 00:23:41.406 { 00:23:41.406 "subsystem": "vmd", 00:23:41.406 "config": [] 00:23:41.406 }, 00:23:41.406 { 00:23:41.406 "subsystem": "accel", 00:23:41.406 "config": [ 00:23:41.406 { 00:23:41.406 "method": "accel_set_options", 00:23:41.406 "params": { 00:23:41.406 "small_cache_size": 128, 00:23:41.406 "large_cache_size": 16, 00:23:41.406 "task_count": 2048, 00:23:41.406 "sequence_count": 2048, 00:23:41.406 "buf_count": 2048 00:23:41.406 } 00:23:41.406 } 00:23:41.406 ] 00:23:41.406 }, 00:23:41.406 { 00:23:41.406 "subsystem": "bdev", 00:23:41.406 "config": [ 00:23:41.406 { 00:23:41.406 "method": "bdev_set_options", 00:23:41.406 "params": { 00:23:41.406 "bdev_io_pool_size": 65535, 00:23:41.406 "bdev_io_cache_size": 256, 00:23:41.406 "bdev_auto_examine": true, 00:23:41.406 "iobuf_small_cache_size": 128, 00:23:41.406 "iobuf_large_cache_size": 16 00:23:41.406 } 00:23:41.406 }, 00:23:41.406 { 00:23:41.406 "method": "bdev_raid_set_options", 00:23:41.406 "params": { 00:23:41.406 "process_window_size_kb": 1024, 00:23:41.406 "process_max_bandwidth_mb_sec": 0 00:23:41.406 } 00:23:41.406 }, 00:23:41.406 { 00:23:41.406 "method": "bdev_iscsi_set_options", 00:23:41.406 "params": { 00:23:41.406 "timeout_sec": 30 00:23:41.406 } 00:23:41.406 }, 00:23:41.406 { 00:23:41.406 "method": "bdev_nvme_set_options", 00:23:41.406 "params": { 00:23:41.406 "action_on_timeout": "none", 00:23:41.406 "timeout_us": 0, 00:23:41.406 "timeout_admin_us": 0, 00:23:41.406 "keep_alive_timeout_ms": 10000, 00:23:41.406 "arbitration_burst": 0, 00:23:41.406 "low_priority_weight": 0, 00:23:41.406 "medium_priority_weight": 0, 00:23:41.406 "high_priority_weight": 0, 00:23:41.406 "nvme_adminq_poll_period_us": 10000, 00:23:41.406 "nvme_ioq_poll_period_us": 0, 00:23:41.406 "io_queue_requests": 512, 00:23:41.406 
"delay_cmd_submit": true, 00:23:41.406 "transport_retry_count": 4, 00:23:41.406 "bdev_retry_count": 3, 00:23:41.406 "transport_ack_timeout": 0, 00:23:41.406 "ctrlr_loss_timeout_sec": 0, 00:23:41.406 "reconnect_delay_sec": 0, 00:23:41.406 "fast_io_fail_timeout_sec": 0, 00:23:41.406 "disable_auto_failback": false, 00:23:41.406 "generate_uuids": false, 00:23:41.406 "transport_tos": 0, 00:23:41.406 "nvme_error_stat": false, 00:23:41.406 "rdma_srq_size": 0, 00:23:41.406 "io_path_stat": false, 00:23:41.406 "allow_accel_sequence": false, 00:23:41.406 "rdma_max_cq_size": 0, 00:23:41.406 "rdma_cm_event_timeout_ms": 0, 00:23:41.406 "dhchap_digests": [ 00:23:41.406 "sha256", 00:23:41.406 "sha384", 00:23:41.406 "sha512" 00:23:41.406 ], 00:23:41.406 "dhchap_dhgroups": [ 00:23:41.406 "null", 00:23:41.406 "ffdhe2048", 00:23:41.406 "ffdhe3072", 00:23:41.406 "ffdhe4096", 00:23:41.406 "ffdhe6144", 00:23:41.406 "ffdhe8192" 00:23:41.406 ] 00:23:41.406 } 00:23:41.406 }, 00:23:41.406 { 00:23:41.406 "method": "bdev_nvme_attach_controller", 00:23:41.406 "params": { 00:23:41.406 "name": "TLSTEST", 00:23:41.406 "trtype": "TCP", 00:23:41.406 "adrfam": "IPv4", 00:23:41.406 "traddr": "10.0.0.2", 00:23:41.406 "trsvcid": "4420", 00:23:41.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.406 "prchk_reftag": false, 00:23:41.406 "prchk_guard": false, 00:23:41.406 "ctrlr_loss_timeout_sec": 0, 00:23:41.406 "reconnect_delay_sec": 0, 00:23:41.406 "fast_io_fail_timeout_sec": 0, 00:23:41.406 "psk": "key0", 00:23:41.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.406 "hdgst": false, 00:23:41.406 "ddgst": false, 00:23:41.406 "multipath": "multipath" 00:23:41.406 } 00:23:41.406 }, 00:23:41.406 { 00:23:41.406 "method": "bdev_nvme_set_hotplug", 00:23:41.406 "params": { 00:23:41.406 "period_us": 100000, 00:23:41.406 "enable": false 00:23:41.406 } 00:23:41.406 }, 00:23:41.406 { 00:23:41.406 "method": "bdev_wait_for_examine" 00:23:41.406 } 00:23:41.406 ] 00:23:41.406 }, 00:23:41.406 { 00:23:41.406 
"subsystem": "nbd", 00:23:41.406 "config": [] 00:23:41.406 } 00:23:41.406 ] 00:23:41.406 }' 00:23:41.406 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.406 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.406 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.406 [2024-11-18 00:29:05.035877] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:41.406 [2024-11-18 00:29:05.035958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278640 ] 00:23:41.406 [2024-11-18 00:29:05.103977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.406 [2024-11-18 00:29:05.152584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.664 [2024-11-18 00:29:05.334731] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.664 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.664 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:41.664 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:41.922 Running I/O for 10 seconds... 
00:23:43.801 3337.00 IOPS, 13.04 MiB/s [2024-11-17T23:29:08.995Z] 3383.00 IOPS, 13.21 MiB/s [2024-11-17T23:29:09.927Z] 3428.33 IOPS, 13.39 MiB/s [2024-11-17T23:29:10.861Z] 3402.00 IOPS, 13.29 MiB/s [2024-11-17T23:29:11.794Z] 3372.60 IOPS, 13.17 MiB/s [2024-11-17T23:29:12.727Z] 3364.00 IOPS, 13.14 MiB/s [2024-11-17T23:29:13.660Z] 3348.57 IOPS, 13.08 MiB/s [2024-11-17T23:29:14.593Z] 3358.62 IOPS, 13.12 MiB/s [2024-11-17T23:29:15.965Z] 3359.56 IOPS, 13.12 MiB/s [2024-11-17T23:29:15.965Z] 3365.00 IOPS, 13.14 MiB/s 00:23:52.143 Latency(us) 00:23:52.143 [2024-11-17T23:29:15.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.143 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:52.143 Verification LBA range: start 0x0 length 0x2000 00:23:52.143 TLSTESTn1 : 10.03 3366.55 13.15 0.00 0.00 37940.92 6310.87 48156.82 00:23:52.143 [2024-11-17T23:29:15.965Z] =================================================================================================================== 00:23:52.143 [2024-11-17T23:29:15.965Z] Total : 3366.55 13.15 0.00 0.00 37940.92 6310.87 48156.82 00:23:52.143 { 00:23:52.143 "results": [ 00:23:52.143 { 00:23:52.143 "job": "TLSTESTn1", 00:23:52.143 "core_mask": "0x4", 00:23:52.143 "workload": "verify", 00:23:52.143 "status": "finished", 00:23:52.143 "verify_range": { 00:23:52.143 "start": 0, 00:23:52.143 "length": 8192 00:23:52.143 }, 00:23:52.143 "queue_depth": 128, 00:23:52.143 "io_size": 4096, 00:23:52.143 "runtime": 10.032522, 00:23:52.143 "iops": 3366.5513018561037, 00:23:52.143 "mibps": 13.150591022875405, 00:23:52.143 "io_failed": 0, 00:23:52.143 "io_timeout": 0, 00:23:52.143 "avg_latency_us": 37940.92319013077, 00:23:52.143 "min_latency_us": 6310.874074074074, 00:23:52.143 "max_latency_us": 48156.8237037037 00:23:52.143 } 00:23:52.143 ], 00:23:52.143 "core_count": 1 00:23:52.143 } 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 278640 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 278640 ']' 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 278640 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278640 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278640' 00:23:52.143 killing process with pid 278640 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 278640 00:23:52.143 Received shutdown signal, test time was about 10.000000 seconds 00:23:52.143 00:23:52.143 Latency(us) 00:23:52.143 [2024-11-17T23:29:15.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.143 [2024-11-17T23:29:15.965Z] =================================================================================================================== 00:23:52.143 [2024-11-17T23:29:15.965Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 278640 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 278485 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # '[' -z 278485 ']' 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 278485 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278485 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278485' 00:23:52.143 killing process with pid 278485 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 278485 00:23:52.143 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 278485 00:23:52.402 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:52.402 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.402 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.402 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.402 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:52.402 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=279842 00:23:52.402 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 279842 00:23:52.402 00:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 279842 ']' 00:23:52.402 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.402 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.402 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.402 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.402 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.402 [2024-11-18 00:29:16.178529] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:52.402 [2024-11-18 00:29:16.178629] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.661 [2024-11-18 00:29:16.253116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.661 [2024-11-18 00:29:16.297946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.661 [2024-11-18 00:29:16.298002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.661 [2024-11-18 00:29:16.298031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.661 [2024-11-18 00:29:16.298042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:52.661 [2024-11-18 00:29:16.298051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.661 [2024-11-18 00:29:16.298652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.661 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.661 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:52.661 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.661 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.661 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.661 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.661 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.f9zssfYYJu 00:23:52.661 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.f9zssfYYJu 00:23:52.661 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:52.919 [2024-11-18 00:29:16.694928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.919 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:53.180 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:53.438 [2024-11-18 00:29:17.212276] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:23:53.438 [2024-11-18 00:29:17.212507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.438 00:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:54.004 malloc0 00:23:54.004 00:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:54.263 00:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.f9zssfYYJu 00:23:54.522 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:54.781 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=280129 00:23:54.781 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:54.781 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.781 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 280129 /var/tmp/bdevperf.sock 00:23:54.781 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280129 ']' 00:23:54.781 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.781 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.781 00:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.781 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.781 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.781 [2024-11-18 00:29:18.473368] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:54.781 [2024-11-18 00:29:18.473446] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280129 ] 00:23:54.781 [2024-11-18 00:29:18.542686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.781 [2024-11-18 00:29:18.589648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.040 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.040 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:55.040 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.f9zssfYYJu 00:23:55.298 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:55.563 [2024-11-18 00:29:19.210249] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered 
experimental 00:23:55.563 nvme0n1 00:23:55.563 00:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:55.821 Running I/O for 1 seconds... 00:23:56.754 3417.00 IOPS, 13.35 MiB/s 00:23:56.754 Latency(us) 00:23:56.754 [2024-11-17T23:29:20.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.754 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:56.754 Verification LBA range: start 0x0 length 0x2000 00:23:56.754 nvme0n1 : 1.03 3442.60 13.45 0.00 0.00 36706.85 11796.48 46409.20 00:23:56.754 [2024-11-17T23:29:20.576Z] =================================================================================================================== 00:23:56.754 [2024-11-17T23:29:20.576Z] Total : 3442.60 13.45 0.00 0.00 36706.85 11796.48 46409.20 00:23:56.754 { 00:23:56.754 "results": [ 00:23:56.754 { 00:23:56.754 "job": "nvme0n1", 00:23:56.754 "core_mask": "0x2", 00:23:56.754 "workload": "verify", 00:23:56.754 "status": "finished", 00:23:56.754 "verify_range": { 00:23:56.754 "start": 0, 00:23:56.754 "length": 8192 00:23:56.754 }, 00:23:56.754 "queue_depth": 128, 00:23:56.754 "io_size": 4096, 00:23:56.754 "runtime": 1.030035, 00:23:56.754 "iops": 3442.6014649987624, 00:23:56.754 "mibps": 13.447661972651415, 00:23:56.754 "io_failed": 0, 00:23:56.754 "io_timeout": 0, 00:23:56.754 "avg_latency_us": 36706.847626329094, 00:23:56.754 "min_latency_us": 11796.48, 00:23:56.754 "max_latency_us": 46409.19703703704 00:23:56.754 } 00:23:56.754 ], 00:23:56.754 "core_count": 1 00:23:56.754 } 00:23:56.754 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 280129 00:23:56.754 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280129 ']' 00:23:56.754 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 280129 00:23:56.754 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:56.754 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.754 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280129 00:23:56.754 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:56.754 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:56.754 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280129' 00:23:56.754 killing process with pid 280129 00:23:56.754 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280129 00:23:56.754 Received shutdown signal, test time was about 1.000000 seconds 00:23:56.754 00:23:56.754 Latency(us) 00:23:56.754 [2024-11-17T23:29:20.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.754 [2024-11-17T23:29:20.576Z] =================================================================================================================== 00:23:56.754 [2024-11-17T23:29:20.576Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.754 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280129 00:23:57.013 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 279842 00:23:57.013 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 279842 ']' 00:23:57.013 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 279842 00:23:57.013 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:57.013 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux 
= Linux ']' 00:23:57.013 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279842 00:23:57.013 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:57.013 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:57.013 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279842' 00:23:57.013 killing process with pid 279842 00:23:57.013 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 279842 00:23:57.013 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 279842 00:23:57.271 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:57.271 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:57.271 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:57.271 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.271 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=280525 00:23:57.271 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:57.271 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 280525 00:23:57.271 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280525 ']' 00:23:57.271 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.271 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.271 00:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.271 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.271 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.271 [2024-11-18 00:29:21.000494] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:57.271 [2024-11-18 00:29:21.000575] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.272 [2024-11-18 00:29:21.072348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.530 [2024-11-18 00:29:21.120708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.530 [2024-11-18 00:29:21.120758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.530 [2024-11-18 00:29:21.120787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.530 [2024-11-18 00:29:21.120798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.530 [2024-11-18 00:29:21.120807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:57.530 [2024-11-18 00:29:21.121385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.530 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.530 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:57.530 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:57.530 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:57.530 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.530 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.530 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:57.530 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.530 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.530 [2024-11-18 00:29:21.260821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.530 malloc0 00:23:57.530 [2024-11-18 00:29:21.292782] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:57.530 [2024-11-18 00:29:21.293043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.530 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.530 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=280554 00:23:57.530 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 280554 /var/tmp/bdevperf.sock 00:23:57.530 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280554 ']' 00:23:57.530 00:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.530 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:57.530 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.530 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.530 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.530 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.788 [2024-11-18 00:29:21.369151] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:57.789 [2024-11-18 00:29:21.369225] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280554 ] 00:23:57.789 [2024-11-18 00:29:21.435577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.789 [2024-11-18 00:29:21.481662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.789 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.789 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:57.789 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.f9zssfYYJu 00:23:58.047 00:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:58.612 [2024-11-18 00:29:22.130389] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:58.612 nvme0n1 00:23:58.612 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:58.612 Running I/O for 1 seconds... 
00:23:59.801 3276.00 IOPS, 12.80 MiB/s 00:23:59.801 Latency(us) 00:23:59.801 [2024-11-17T23:29:23.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.801 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:59.801 Verification LBA range: start 0x0 length 0x2000 00:23:59.801 nvme0n1 : 1.04 3279.35 12.81 0.00 0.00 38375.79 6990.51 29321.29 00:23:59.801 [2024-11-17T23:29:23.623Z] =================================================================================================================== 00:23:59.801 [2024-11-17T23:29:23.623Z] Total : 3279.35 12.81 0.00 0.00 38375.79 6990.51 29321.29 00:23:59.801 { 00:23:59.801 "results": [ 00:23:59.801 { 00:23:59.801 "job": "nvme0n1", 00:23:59.801 "core_mask": "0x2", 00:23:59.801 "workload": "verify", 00:23:59.801 "status": "finished", 00:23:59.801 "verify_range": { 00:23:59.801 "start": 0, 00:23:59.801 "length": 8192 00:23:59.801 }, 00:23:59.801 "queue_depth": 128, 00:23:59.801 "io_size": 4096, 00:23:59.801 "runtime": 1.03801, 00:23:59.801 "iops": 3279.351836687508, 00:23:59.801 "mibps": 12.809968112060577, 00:23:59.801 "io_failed": 0, 00:23:59.801 "io_timeout": 0, 00:23:59.801 "avg_latency_us": 38375.78890890891, 00:23:59.801 "min_latency_us": 6990.506666666667, 00:23:59.801 "max_latency_us": 29321.291851851853 00:23:59.801 } 00:23:59.801 ], 00:23:59.801 "core_count": 1 00:23:59.801 } 00:23:59.801 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:59.801 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.801 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.801 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.801 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:59.801 "subsystems": [ 00:23:59.801 { 00:23:59.801 "subsystem": 
"keyring", 00:23:59.801 "config": [ 00:23:59.801 { 00:23:59.801 "method": "keyring_file_add_key", 00:23:59.801 "params": { 00:23:59.801 "name": "key0", 00:23:59.801 "path": "/tmp/tmp.f9zssfYYJu" 00:23:59.801 } 00:23:59.801 } 00:23:59.801 ] 00:23:59.801 }, 00:23:59.801 { 00:23:59.801 "subsystem": "iobuf", 00:23:59.801 "config": [ 00:23:59.801 { 00:23:59.801 "method": "iobuf_set_options", 00:23:59.801 "params": { 00:23:59.801 "small_pool_count": 8192, 00:23:59.801 "large_pool_count": 1024, 00:23:59.801 "small_bufsize": 8192, 00:23:59.801 "large_bufsize": 135168, 00:23:59.801 "enable_numa": false 00:23:59.801 } 00:23:59.801 } 00:23:59.801 ] 00:23:59.801 }, 00:23:59.801 { 00:23:59.801 "subsystem": "sock", 00:23:59.801 "config": [ 00:23:59.801 { 00:23:59.801 "method": "sock_set_default_impl", 00:23:59.801 "params": { 00:23:59.801 "impl_name": "posix" 00:23:59.801 } 00:23:59.801 }, 00:23:59.801 { 00:23:59.801 "method": "sock_impl_set_options", 00:23:59.801 "params": { 00:23:59.801 "impl_name": "ssl", 00:23:59.801 "recv_buf_size": 4096, 00:23:59.801 "send_buf_size": 4096, 00:23:59.801 "enable_recv_pipe": true, 00:23:59.801 "enable_quickack": false, 00:23:59.801 "enable_placement_id": 0, 00:23:59.801 "enable_zerocopy_send_server": true, 00:23:59.801 "enable_zerocopy_send_client": false, 00:23:59.801 "zerocopy_threshold": 0, 00:23:59.801 "tls_version": 0, 00:23:59.801 "enable_ktls": false 00:23:59.801 } 00:23:59.801 }, 00:23:59.801 { 00:23:59.801 "method": "sock_impl_set_options", 00:23:59.801 "params": { 00:23:59.801 "impl_name": "posix", 00:23:59.801 "recv_buf_size": 2097152, 00:23:59.801 "send_buf_size": 2097152, 00:23:59.801 "enable_recv_pipe": true, 00:23:59.801 "enable_quickack": false, 00:23:59.801 "enable_placement_id": 0, 00:23:59.801 "enable_zerocopy_send_server": true, 00:23:59.801 "enable_zerocopy_send_client": false, 00:23:59.801 "zerocopy_threshold": 0, 00:23:59.801 "tls_version": 0, 00:23:59.801 "enable_ktls": false 00:23:59.801 } 00:23:59.801 } 00:23:59.801 
] 00:23:59.801 }, 00:23:59.801 { 00:23:59.801 "subsystem": "vmd", 00:23:59.801 "config": [] 00:23:59.801 }, 00:23:59.801 { 00:23:59.801 "subsystem": "accel", 00:23:59.801 "config": [ 00:23:59.801 { 00:23:59.801 "method": "accel_set_options", 00:23:59.801 "params": { 00:23:59.801 "small_cache_size": 128, 00:23:59.801 "large_cache_size": 16, 00:23:59.801 "task_count": 2048, 00:23:59.801 "sequence_count": 2048, 00:23:59.801 "buf_count": 2048 00:23:59.801 } 00:23:59.801 } 00:23:59.801 ] 00:23:59.801 }, 00:23:59.801 { 00:23:59.801 "subsystem": "bdev", 00:23:59.801 "config": [ 00:23:59.801 { 00:23:59.801 "method": "bdev_set_options", 00:23:59.801 "params": { 00:23:59.801 "bdev_io_pool_size": 65535, 00:23:59.801 "bdev_io_cache_size": 256, 00:23:59.801 "bdev_auto_examine": true, 00:23:59.801 "iobuf_small_cache_size": 128, 00:23:59.801 "iobuf_large_cache_size": 16 00:23:59.801 } 00:23:59.801 }, 00:23:59.801 { 00:23:59.801 "method": "bdev_raid_set_options", 00:23:59.801 "params": { 00:23:59.801 "process_window_size_kb": 1024, 00:23:59.801 "process_max_bandwidth_mb_sec": 0 00:23:59.801 } 00:23:59.801 }, 00:23:59.801 { 00:23:59.801 "method": "bdev_iscsi_set_options", 00:23:59.801 "params": { 00:23:59.801 "timeout_sec": 30 00:23:59.801 } 00:23:59.801 }, 00:23:59.801 { 00:23:59.801 "method": "bdev_nvme_set_options", 00:23:59.802 "params": { 00:23:59.802 "action_on_timeout": "none", 00:23:59.802 "timeout_us": 0, 00:23:59.802 "timeout_admin_us": 0, 00:23:59.802 "keep_alive_timeout_ms": 10000, 00:23:59.802 "arbitration_burst": 0, 00:23:59.802 "low_priority_weight": 0, 00:23:59.802 "medium_priority_weight": 0, 00:23:59.802 "high_priority_weight": 0, 00:23:59.802 "nvme_adminq_poll_period_us": 10000, 00:23:59.802 "nvme_ioq_poll_period_us": 0, 00:23:59.802 "io_queue_requests": 0, 00:23:59.802 "delay_cmd_submit": true, 00:23:59.802 "transport_retry_count": 4, 00:23:59.802 "bdev_retry_count": 3, 00:23:59.802 "transport_ack_timeout": 0, 00:23:59.802 "ctrlr_loss_timeout_sec": 0, 
00:23:59.802 "reconnect_delay_sec": 0, 00:23:59.802 "fast_io_fail_timeout_sec": 0, 00:23:59.802 "disable_auto_failback": false, 00:23:59.802 "generate_uuids": false, 00:23:59.802 "transport_tos": 0, 00:23:59.802 "nvme_error_stat": false, 00:23:59.802 "rdma_srq_size": 0, 00:23:59.802 "io_path_stat": false, 00:23:59.802 "allow_accel_sequence": false, 00:23:59.802 "rdma_max_cq_size": 0, 00:23:59.802 "rdma_cm_event_timeout_ms": 0, 00:23:59.802 "dhchap_digests": [ 00:23:59.802 "sha256", 00:23:59.802 "sha384", 00:23:59.802 "sha512" 00:23:59.802 ], 00:23:59.802 "dhchap_dhgroups": [ 00:23:59.802 "null", 00:23:59.802 "ffdhe2048", 00:23:59.802 "ffdhe3072", 00:23:59.802 "ffdhe4096", 00:23:59.802 "ffdhe6144", 00:23:59.802 "ffdhe8192" 00:23:59.802 ] 00:23:59.802 } 00:23:59.802 }, 00:23:59.802 { 00:23:59.802 "method": "bdev_nvme_set_hotplug", 00:23:59.802 "params": { 00:23:59.802 "period_us": 100000, 00:23:59.802 "enable": false 00:23:59.802 } 00:23:59.802 }, 00:23:59.802 { 00:23:59.802 "method": "bdev_malloc_create", 00:23:59.802 "params": { 00:23:59.802 "name": "malloc0", 00:23:59.802 "num_blocks": 8192, 00:23:59.802 "block_size": 4096, 00:23:59.802 "physical_block_size": 4096, 00:23:59.802 "uuid": "3a80e85e-bd3b-4336-acd7-20940f82e7b4", 00:23:59.802 "optimal_io_boundary": 0, 00:23:59.802 "md_size": 0, 00:23:59.802 "dif_type": 0, 00:23:59.802 "dif_is_head_of_md": false, 00:23:59.802 "dif_pi_format": 0 00:23:59.802 } 00:23:59.802 }, 00:23:59.802 { 00:23:59.802 "method": "bdev_wait_for_examine" 00:23:59.802 } 00:23:59.802 ] 00:23:59.802 }, 00:23:59.802 { 00:23:59.802 "subsystem": "nbd", 00:23:59.802 "config": [] 00:23:59.802 }, 00:23:59.802 { 00:23:59.802 "subsystem": "scheduler", 00:23:59.802 "config": [ 00:23:59.802 { 00:23:59.802 "method": "framework_set_scheduler", 00:23:59.802 "params": { 00:23:59.802 "name": "static" 00:23:59.802 } 00:23:59.802 } 00:23:59.802 ] 00:23:59.802 }, 00:23:59.802 { 00:23:59.802 "subsystem": "nvmf", 00:23:59.802 "config": [ 00:23:59.802 { 
00:23:59.802 "method": "nvmf_set_config", 00:23:59.802 "params": { 00:23:59.802 "discovery_filter": "match_any", 00:23:59.802 "admin_cmd_passthru": { 00:23:59.802 "identify_ctrlr": false 00:23:59.802 }, 00:23:59.802 "dhchap_digests": [ 00:23:59.802 "sha256", 00:23:59.802 "sha384", 00:23:59.802 "sha512" 00:23:59.802 ], 00:23:59.802 "dhchap_dhgroups": [ 00:23:59.802 "null", 00:23:59.802 "ffdhe2048", 00:23:59.802 "ffdhe3072", 00:23:59.802 "ffdhe4096", 00:23:59.802 "ffdhe6144", 00:23:59.802 "ffdhe8192" 00:23:59.802 ] 00:23:59.802 } 00:23:59.802 }, 00:23:59.802 { 00:23:59.802 "method": "nvmf_set_max_subsystems", 00:23:59.802 "params": { 00:23:59.802 "max_subsystems": 1024 00:23:59.802 } 00:23:59.802 }, 00:23:59.802 { 00:23:59.802 "method": "nvmf_set_crdt", 00:23:59.802 "params": { 00:23:59.802 "crdt1": 0, 00:23:59.802 "crdt2": 0, 00:23:59.802 "crdt3": 0 00:23:59.802 } 00:23:59.802 }, 00:23:59.802 { 00:23:59.802 "method": "nvmf_create_transport", 00:23:59.802 "params": { 00:23:59.802 "trtype": "TCP", 00:23:59.802 "max_queue_depth": 128, 00:23:59.802 "max_io_qpairs_per_ctrlr": 127, 00:23:59.802 "in_capsule_data_size": 4096, 00:23:59.802 "max_io_size": 131072, 00:23:59.802 "io_unit_size": 131072, 00:23:59.802 "max_aq_depth": 128, 00:23:59.802 "num_shared_buffers": 511, 00:23:59.802 "buf_cache_size": 4294967295, 00:23:59.802 "dif_insert_or_strip": false, 00:23:59.802 "zcopy": false, 00:23:59.802 "c2h_success": false, 00:23:59.802 "sock_priority": 0, 00:23:59.802 "abort_timeout_sec": 1, 00:23:59.802 "ack_timeout": 0, 00:23:59.802 "data_wr_pool_size": 0 00:23:59.802 } 00:23:59.802 }, 00:23:59.802 { 00:23:59.802 "method": "nvmf_create_subsystem", 00:23:59.802 "params": { 00:23:59.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.802 "allow_any_host": false, 00:23:59.802 "serial_number": "00000000000000000000", 00:23:59.802 "model_number": "SPDK bdev Controller", 00:23:59.802 "max_namespaces": 32, 00:23:59.802 "min_cntlid": 1, 00:23:59.802 "max_cntlid": 65519, 00:23:59.802 
"ana_reporting": false 00:23:59.802 } 00:23:59.802 }, 00:23:59.802 { 00:23:59.802 "method": "nvmf_subsystem_add_host", 00:23:59.802 "params": { 00:23:59.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.802 "host": "nqn.2016-06.io.spdk:host1", 00:23:59.802 "psk": "key0" 00:23:59.802 } 00:23:59.802 }, 00:23:59.802 { 00:23:59.802 "method": "nvmf_subsystem_add_ns", 00:23:59.802 "params": { 00:23:59.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.802 "namespace": { 00:23:59.802 "nsid": 1, 00:23:59.802 "bdev_name": "malloc0", 00:23:59.802 "nguid": "3A80E85EBD3B4336ACD720940F82E7B4", 00:23:59.802 "uuid": "3a80e85e-bd3b-4336-acd7-20940f82e7b4", 00:23:59.802 "no_auto_visible": false 00:23:59.802 } 00:23:59.802 } 00:23:59.802 }, 00:23:59.802 { 00:23:59.802 "method": "nvmf_subsystem_add_listener", 00:23:59.802 "params": { 00:23:59.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.802 "listen_address": { 00:23:59.802 "trtype": "TCP", 00:23:59.802 "adrfam": "IPv4", 00:23:59.802 "traddr": "10.0.0.2", 00:23:59.802 "trsvcid": "4420" 00:23:59.803 }, 00:23:59.803 "secure_channel": false, 00:23:59.803 "sock_impl": "ssl" 00:23:59.803 } 00:23:59.803 } 00:23:59.803 ] 00:23:59.803 } 00:23:59.803 ] 00:23:59.803 }' 00:23:59.803 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:00.061 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:00.061 "subsystems": [ 00:24:00.061 { 00:24:00.061 "subsystem": "keyring", 00:24:00.061 "config": [ 00:24:00.061 { 00:24:00.061 "method": "keyring_file_add_key", 00:24:00.061 "params": { 00:24:00.061 "name": "key0", 00:24:00.061 "path": "/tmp/tmp.f9zssfYYJu" 00:24:00.061 } 00:24:00.061 } 00:24:00.061 ] 00:24:00.061 }, 00:24:00.061 { 00:24:00.061 "subsystem": "iobuf", 00:24:00.061 "config": [ 00:24:00.061 { 00:24:00.061 "method": "iobuf_set_options", 00:24:00.061 "params": { 00:24:00.061 
"small_pool_count": 8192, 00:24:00.061 "large_pool_count": 1024, 00:24:00.061 "small_bufsize": 8192, 00:24:00.061 "large_bufsize": 135168, 00:24:00.061 "enable_numa": false 00:24:00.061 } 00:24:00.061 } 00:24:00.061 ] 00:24:00.061 }, 00:24:00.061 { 00:24:00.061 "subsystem": "sock", 00:24:00.061 "config": [ 00:24:00.061 { 00:24:00.061 "method": "sock_set_default_impl", 00:24:00.061 "params": { 00:24:00.061 "impl_name": "posix" 00:24:00.061 } 00:24:00.061 }, 00:24:00.061 { 00:24:00.061 "method": "sock_impl_set_options", 00:24:00.061 "params": { 00:24:00.061 "impl_name": "ssl", 00:24:00.061 "recv_buf_size": 4096, 00:24:00.061 "send_buf_size": 4096, 00:24:00.061 "enable_recv_pipe": true, 00:24:00.061 "enable_quickack": false, 00:24:00.061 "enable_placement_id": 0, 00:24:00.061 "enable_zerocopy_send_server": true, 00:24:00.062 "enable_zerocopy_send_client": false, 00:24:00.062 "zerocopy_threshold": 0, 00:24:00.062 "tls_version": 0, 00:24:00.062 "enable_ktls": false 00:24:00.062 } 00:24:00.062 }, 00:24:00.062 { 00:24:00.062 "method": "sock_impl_set_options", 00:24:00.062 "params": { 00:24:00.062 "impl_name": "posix", 00:24:00.062 "recv_buf_size": 2097152, 00:24:00.062 "send_buf_size": 2097152, 00:24:00.062 "enable_recv_pipe": true, 00:24:00.062 "enable_quickack": false, 00:24:00.062 "enable_placement_id": 0, 00:24:00.062 "enable_zerocopy_send_server": true, 00:24:00.062 "enable_zerocopy_send_client": false, 00:24:00.062 "zerocopy_threshold": 0, 00:24:00.062 "tls_version": 0, 00:24:00.062 "enable_ktls": false 00:24:00.062 } 00:24:00.062 } 00:24:00.062 ] 00:24:00.062 }, 00:24:00.062 { 00:24:00.062 "subsystem": "vmd", 00:24:00.062 "config": [] 00:24:00.062 }, 00:24:00.062 { 00:24:00.062 "subsystem": "accel", 00:24:00.062 "config": [ 00:24:00.062 { 00:24:00.062 "method": "accel_set_options", 00:24:00.062 "params": { 00:24:00.062 "small_cache_size": 128, 00:24:00.062 "large_cache_size": 16, 00:24:00.062 "task_count": 2048, 00:24:00.062 "sequence_count": 2048, 00:24:00.062 
"buf_count": 2048 00:24:00.062 } 00:24:00.062 } 00:24:00.062 ] 00:24:00.062 }, 00:24:00.062 { 00:24:00.062 "subsystem": "bdev", 00:24:00.062 "config": [ 00:24:00.062 { 00:24:00.062 "method": "bdev_set_options", 00:24:00.062 "params": { 00:24:00.062 "bdev_io_pool_size": 65535, 00:24:00.062 "bdev_io_cache_size": 256, 00:24:00.062 "bdev_auto_examine": true, 00:24:00.062 "iobuf_small_cache_size": 128, 00:24:00.062 "iobuf_large_cache_size": 16 00:24:00.062 } 00:24:00.062 }, 00:24:00.062 { 00:24:00.062 "method": "bdev_raid_set_options", 00:24:00.062 "params": { 00:24:00.062 "process_window_size_kb": 1024, 00:24:00.062 "process_max_bandwidth_mb_sec": 0 00:24:00.062 } 00:24:00.062 }, 00:24:00.062 { 00:24:00.062 "method": "bdev_iscsi_set_options", 00:24:00.062 "params": { 00:24:00.062 "timeout_sec": 30 00:24:00.062 } 00:24:00.062 }, 00:24:00.062 { 00:24:00.062 "method": "bdev_nvme_set_options", 00:24:00.062 "params": { 00:24:00.062 "action_on_timeout": "none", 00:24:00.062 "timeout_us": 0, 00:24:00.062 "timeout_admin_us": 0, 00:24:00.062 "keep_alive_timeout_ms": 10000, 00:24:00.062 "arbitration_burst": 0, 00:24:00.062 "low_priority_weight": 0, 00:24:00.062 "medium_priority_weight": 0, 00:24:00.062 "high_priority_weight": 0, 00:24:00.062 "nvme_adminq_poll_period_us": 10000, 00:24:00.062 "nvme_ioq_poll_period_us": 0, 00:24:00.062 "io_queue_requests": 512, 00:24:00.062 "delay_cmd_submit": true, 00:24:00.062 "transport_retry_count": 4, 00:24:00.062 "bdev_retry_count": 3, 00:24:00.062 "transport_ack_timeout": 0, 00:24:00.062 "ctrlr_loss_timeout_sec": 0, 00:24:00.062 "reconnect_delay_sec": 0, 00:24:00.062 "fast_io_fail_timeout_sec": 0, 00:24:00.062 "disable_auto_failback": false, 00:24:00.062 "generate_uuids": false, 00:24:00.062 "transport_tos": 0, 00:24:00.062 "nvme_error_stat": false, 00:24:00.062 "rdma_srq_size": 0, 00:24:00.062 "io_path_stat": false, 00:24:00.062 "allow_accel_sequence": false, 00:24:00.062 "rdma_max_cq_size": 0, 00:24:00.062 "rdma_cm_event_timeout_ms": 0, 
00:24:00.062 "dhchap_digests": [ 00:24:00.062 "sha256", 00:24:00.062 "sha384", 00:24:00.062 "sha512" 00:24:00.062 ], 00:24:00.062 "dhchap_dhgroups": [ 00:24:00.062 "null", 00:24:00.062 "ffdhe2048", 00:24:00.062 "ffdhe3072", 00:24:00.062 "ffdhe4096", 00:24:00.062 "ffdhe6144", 00:24:00.062 "ffdhe8192" 00:24:00.062 ] 00:24:00.062 } 00:24:00.062 }, 00:24:00.062 { 00:24:00.062 "method": "bdev_nvme_attach_controller", 00:24:00.062 "params": { 00:24:00.062 "name": "nvme0", 00:24:00.062 "trtype": "TCP", 00:24:00.062 "adrfam": "IPv4", 00:24:00.062 "traddr": "10.0.0.2", 00:24:00.062 "trsvcid": "4420", 00:24:00.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.062 "prchk_reftag": false, 00:24:00.062 "prchk_guard": false, 00:24:00.062 "ctrlr_loss_timeout_sec": 0, 00:24:00.062 "reconnect_delay_sec": 0, 00:24:00.062 "fast_io_fail_timeout_sec": 0, 00:24:00.062 "psk": "key0", 00:24:00.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:00.062 "hdgst": false, 00:24:00.062 "ddgst": false, 00:24:00.062 "multipath": "multipath" 00:24:00.062 } 00:24:00.062 }, 00:24:00.062 { 00:24:00.062 "method": "bdev_nvme_set_hotplug", 00:24:00.062 "params": { 00:24:00.062 "period_us": 100000, 00:24:00.062 "enable": false 00:24:00.062 } 00:24:00.062 }, 00:24:00.062 { 00:24:00.062 "method": "bdev_enable_histogram", 00:24:00.062 "params": { 00:24:00.062 "name": "nvme0n1", 00:24:00.062 "enable": true 00:24:00.062 } 00:24:00.062 }, 00:24:00.062 { 00:24:00.062 "method": "bdev_wait_for_examine" 00:24:00.062 } 00:24:00.062 ] 00:24:00.062 }, 00:24:00.062 { 00:24:00.062 "subsystem": "nbd", 00:24:00.062 "config": [] 00:24:00.062 } 00:24:00.062 ] 00:24:00.062 }' 00:24:00.062 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 280554 00:24:00.062 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280554 ']' 00:24:00.062 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280554 00:24:00.062 00:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:00.062 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.062 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280554 00:24:00.062 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:00.062 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:00.062 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280554' 00:24:00.062 killing process with pid 280554 00:24:00.062 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280554 00:24:00.062 Received shutdown signal, test time was about 1.000000 seconds 00:24:00.063 00:24:00.063 Latency(us) 00:24:00.063 [2024-11-17T23:29:23.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.063 [2024-11-17T23:29:23.885Z] =================================================================================================================== 00:24:00.063 [2024-11-17T23:29:23.885Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:00.063 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280554 00:24:00.321 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 280525 00:24:00.321 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280525 ']' 00:24:00.321 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280525 00:24:00.321 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:00.321 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.321 00:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280525 00:24:00.322 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:00.322 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:00.322 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280525' 00:24:00.322 killing process with pid 280525 00:24:00.322 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280525 00:24:00.322 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280525 00:24:00.580 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:00.580 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:00.580 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:00.580 "subsystems": [ 00:24:00.580 { 00:24:00.580 "subsystem": "keyring", 00:24:00.580 "config": [ 00:24:00.580 { 00:24:00.580 "method": "keyring_file_add_key", 00:24:00.580 "params": { 00:24:00.580 "name": "key0", 00:24:00.580 "path": "/tmp/tmp.f9zssfYYJu" 00:24:00.580 } 00:24:00.580 } 00:24:00.580 ] 00:24:00.580 }, 00:24:00.580 { 00:24:00.580 "subsystem": "iobuf", 00:24:00.580 "config": [ 00:24:00.580 { 00:24:00.580 "method": "iobuf_set_options", 00:24:00.580 "params": { 00:24:00.580 "small_pool_count": 8192, 00:24:00.580 "large_pool_count": 1024, 00:24:00.580 "small_bufsize": 8192, 00:24:00.580 "large_bufsize": 135168, 00:24:00.580 "enable_numa": false 00:24:00.580 } 00:24:00.580 } 00:24:00.580 ] 00:24:00.580 }, 00:24:00.580 { 00:24:00.580 "subsystem": "sock", 00:24:00.580 "config": [ 00:24:00.580 { 00:24:00.580 "method": "sock_set_default_impl", 00:24:00.580 "params": { 00:24:00.580 "impl_name": "posix" 00:24:00.580 
} 00:24:00.580 }, 00:24:00.580 { 00:24:00.580 "method": "sock_impl_set_options", 00:24:00.580 "params": { 00:24:00.580 "impl_name": "ssl", 00:24:00.580 "recv_buf_size": 4096, 00:24:00.580 "send_buf_size": 4096, 00:24:00.580 "enable_recv_pipe": true, 00:24:00.580 "enable_quickack": false, 00:24:00.580 "enable_placement_id": 0, 00:24:00.580 "enable_zerocopy_send_server": true, 00:24:00.580 "enable_zerocopy_send_client": false, 00:24:00.580 "zerocopy_threshold": 0, 00:24:00.580 "tls_version": 0, 00:24:00.580 "enable_ktls": false 00:24:00.580 } 00:24:00.580 }, 00:24:00.580 { 00:24:00.580 "method": "sock_impl_set_options", 00:24:00.580 "params": { 00:24:00.580 "impl_name": "posix", 00:24:00.580 "recv_buf_size": 2097152, 00:24:00.580 "send_buf_size": 2097152, 00:24:00.580 "enable_recv_pipe": true, 00:24:00.580 "enable_quickack": false, 00:24:00.580 "enable_placement_id": 0, 00:24:00.580 "enable_zerocopy_send_server": true, 00:24:00.580 "enable_zerocopy_send_client": false, 00:24:00.580 "zerocopy_threshold": 0, 00:24:00.580 "tls_version": 0, 00:24:00.580 "enable_ktls": false 00:24:00.580 } 00:24:00.580 } 00:24:00.580 ] 00:24:00.580 }, 00:24:00.580 { 00:24:00.580 "subsystem": "vmd", 00:24:00.580 "config": [] 00:24:00.580 }, 00:24:00.580 { 00:24:00.580 "subsystem": "accel", 00:24:00.580 "config": [ 00:24:00.580 { 00:24:00.581 "method": "accel_set_options", 00:24:00.581 "params": { 00:24:00.581 "small_cache_size": 128, 00:24:00.581 "large_cache_size": 16, 00:24:00.581 "task_count": 2048, 00:24:00.581 "sequence_count": 2048, 00:24:00.581 "buf_count": 2048 00:24:00.581 } 00:24:00.581 } 00:24:00.581 ] 00:24:00.581 }, 00:24:00.581 { 00:24:00.581 "subsystem": "bdev", 00:24:00.581 "config": [ 00:24:00.581 { 00:24:00.581 "method": "bdev_set_options", 00:24:00.581 "params": { 00:24:00.581 "bdev_io_pool_size": 65535, 00:24:00.581 "bdev_io_cache_size": 256, 00:24:00.581 "bdev_auto_examine": true, 00:24:00.581 "iobuf_small_cache_size": 128, 00:24:00.581 "iobuf_large_cache_size": 16 
00:24:00.581 } 00:24:00.581 }, 00:24:00.581 { 00:24:00.581 "method": "bdev_raid_set_options", 00:24:00.581 "params": { 00:24:00.581 "process_window_size_kb": 1024, 00:24:00.581 "process_max_bandwidth_mb_sec": 0 00:24:00.581 } 00:24:00.581 }, 00:24:00.581 { 00:24:00.581 "method": "bdev_iscsi_set_options", 00:24:00.581 "params": { 00:24:00.581 "timeout_sec": 30 00:24:00.581 } 00:24:00.581 }, 00:24:00.581 { 00:24:00.581 "method": "bdev_nvme_set_options", 00:24:00.581 "params": { 00:24:00.581 "action_on_timeout": "none", 00:24:00.581 "timeout_us": 0, 00:24:00.581 "timeout_admin_us": 0, 00:24:00.581 "keep_alive_timeout_ms": 10000, 00:24:00.581 "arbitration_burst": 0, 00:24:00.581 "low_priority_weight": 0, 00:24:00.581 "medium_priority_weight": 0, 00:24:00.581 "high_priority_weight": 0, 00:24:00.581 "nvme_adminq_poll_period_us": 10000, 00:24:00.581 "nvme_ioq_poll_period_us": 0, 00:24:00.581 "io_queue_requests": 0, 00:24:00.581 "delay_cmd_submit": true, 00:24:00.581 "transport_retry_count": 4, 00:24:00.581 "bdev_retry_count": 3, 00:24:00.581 "transport_ack_timeout": 0, 00:24:00.581 "ctrlr_loss_timeout_sec": 0, 00:24:00.581 "reconnect_delay_sec": 0, 00:24:00.581 "fast_io_fail_timeout_sec": 0, 00:24:00.581 "disable_auto_failback": false, 00:24:00.581 "generate_uuids": false, 00:24:00.581 "transport_tos": 0, 00:24:00.581 "nvme_error_stat": false, 00:24:00.581 "rdma_srq_size": 0, 00:24:00.581 "io_path_stat": false, 00:24:00.581 "allow_accel_sequence": false, 00:24:00.581 "rdma_max_cq_size": 0, 00:24:00.581 "rdma_cm_event_timeout_ms": 0, 00:24:00.581 "dhchap_digests": [ 00:24:00.581 "sha256", 00:24:00.581 "sha384", 00:24:00.581 "sha512" 00:24:00.581 ], 00:24:00.581 "dhchap_dhgroups": [ 00:24:00.581 "null", 00:24:00.581 "ffdhe2048", 00:24:00.581 "ffdhe3072", 00:24:00.581 "ffdhe4096", 00:24:00.581 "ffdhe6144", 00:24:00.581 "ffdhe8192" 00:24:00.581 ] 00:24:00.581 } 00:24:00.581 }, 00:24:00.581 { 00:24:00.581 "method": "bdev_nvme_set_hotplug", 00:24:00.581 "params": { 00:24:00.581 
"period_us": 100000, 00:24:00.581 "enable": false 00:24:00.581 } 00:24:00.581 }, 00:24:00.581 { 00:24:00.581 "method": "bdev_malloc_create", 00:24:00.581 "params": { 00:24:00.581 "name": "malloc0", 00:24:00.581 "num_blocks": 8192, 00:24:00.581 "block_size": 4096, 00:24:00.581 "physical_block_size": 4096, 00:24:00.581 "uuid": "3a80e85e-bd3b-4336-acd7-20940f82e7b4", 00:24:00.581 "optimal_io_boundary": 0, 00:24:00.581 "md_size": 0, 00:24:00.581 "dif_type": 0, 00:24:00.581 "dif_is_head_of_md": false, 00:24:00.581 "dif_pi_format": 0 00:24:00.581 } 00:24:00.581 }, 00:24:00.581 { 00:24:00.581 "method": "bdev_wait_for_examine" 00:24:00.581 } 00:24:00.581 ] 00:24:00.581 }, 00:24:00.581 { 00:24:00.581 "subsystem": "nbd", 00:24:00.581 "config": [] 00:24:00.581 }, 00:24:00.581 { 00:24:00.581 "subsystem": "scheduler", 00:24:00.581 "config": [ 00:24:00.581 { 00:24:00.581 "method": "framework_set_scheduler", 00:24:00.581 "params": { 00:24:00.581 "name": "static" 00:24:00.581 } 00:24:00.581 } 00:24:00.581 ] 00:24:00.581 }, 00:24:00.581 { 00:24:00.581 "subsystem": "nvmf", 00:24:00.581 "config": [ 00:24:00.581 { 00:24:00.581 "method": "nvmf_set_config", 00:24:00.581 "params": { 00:24:00.581 "discovery_filter": "match_any", 00:24:00.581 "admin_cmd_passthru": { 00:24:00.581 "identify_ctrlr": false 00:24:00.581 }, 00:24:00.581 "dhchap_digests": [ 00:24:00.581 "sha256", 00:24:00.581 "sha384", 00:24:00.581 "sha512" 00:24:00.581 ], 00:24:00.581 "dhchap_dhgroups": [ 00:24:00.581 "null", 00:24:00.581 "ffdhe2048", 00:24:00.581 "ffdhe3072", 00:24:00.581 "ffdhe4096", 00:24:00.581 "ffdhe6144", 00:24:00.581 "ffdhe8192" 00:24:00.581 ] 00:24:00.581 } 00:24:00.581 }, 00:24:00.581 { 00:24:00.581 "method": "nvmf_set_max_subsystems", 00:24:00.581 "params": { 00:24:00.581 "max_subsystems": 1024 00:24:00.581 } 00:24:00.581 }, 00:24:00.581 { 00:24:00.581 "method": "nvmf_set_crdt", 00:24:00.581 "params": { 00:24:00.581 "crdt1": 0, 00:24:00.581 "crdt2": 0, 00:24:00.581 "crdt3": 0 00:24:00.581 } 
00:24:00.581 }, 00:24:00.581 { 00:24:00.581 "method": "nvmf_create_transport", 00:24:00.581 "params": { 00:24:00.581 "trtype": "TCP", 00:24:00.581 "max_queue_depth": 128, 00:24:00.581 "max_io_qpairs_per_ctrlr": 127, 00:24:00.581 "in_capsule_data_size": 4096, 00:24:00.581 "max_io_size": 131072, 00:24:00.581 "io_unit_size": 131072, 00:24:00.581 "max_aq_depth": 128, 00:24:00.581 "num_shared_buffers": 511, 00:24:00.581 "buf_cache_size": 4294967295, 00:24:00.581 "dif_insert_or_strip": false, 00:24:00.581 "zcopy": false, 00:24:00.581 "c2h_success": false, 00:24:00.581 "sock_priority": 0, 00:24:00.581 "abort_timeout_sec": 1, 00:24:00.581 "ack_timeout": 0, 00:24:00.581 "data_wr_pool_size": 0 00:24:00.582 } 00:24:00.582 }, 00:24:00.582 { 00:24:00.582 "method": "nvmf_create_subsystem", 00:24:00.582 "params": { 00:24:00.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.582 "allow_any_host": false, 00:24:00.582 "serial_number": "00000000000000000000", 00:24:00.582 "model_number": "SPDK bdev Controller", 00:24:00.582 "max_namespaces": 32, 00:24:00.582 "min_cntlid": 1, 00:24:00.582 "max_cntlid": 65519, 00:24:00.582 "ana_reporting": false 00:24:00.582 } 00:24:00.582 }, 00:24:00.582 { 00:24:00.582 "method": "nvmf_subsystem_add_host", 00:24:00.582 "params": { 00:24:00.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.582 "host": "nqn.2016-06.io.spdk:host1", 00:24:00.582 "psk": "key0" 00:24:00.582 } 00:24:00.582 }, 00:24:00.582 { 00:24:00.582 "method": "nvmf_subsystem_add_ns", 00:24:00.582 "params": { 00:24:00.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.582 "namespace": { 00:24:00.582 "nsid": 1, 00:24:00.582 "bdev_name": "malloc0", 00:24:00.582 "nguid": "3A80E85EBD3B4336ACD720940F82E7B4", 00:24:00.582 "uuid": "3a80e85e-bd3b-4336-acd7-20940f82e7b4", 00:24:00.582 "no_auto_visible": false 00:24:00.582 } 00:24:00.582 } 00:24:00.582 }, 00:24:00.582 { 00:24:00.582 "method": "nvmf_subsystem_add_listener", 00:24:00.582 "params": { 00:24:00.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:24:00.582 "listen_address": { 00:24:00.582 "trtype": "TCP", 00:24:00.582 "adrfam": "IPv4", 00:24:00.582 "traddr": "10.0.0.2", 00:24:00.582 "trsvcid": "4420" 00:24:00.582 }, 00:24:00.582 "secure_channel": false, 00:24:00.582 "sock_impl": "ssl" 00:24:00.582 } 00:24:00.582 } 00:24:00.582 ] 00:24:00.582 } 00:24:00.582 ] 00:24:00.582 }' 00:24:00.582 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:00.582 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.582 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=280955 00:24:00.582 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:00.582 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 280955 00:24:00.582 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280955 ']' 00:24:00.582 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.582 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.582 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.582 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.582 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.582 [2024-11-18 00:29:24.350863] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:24:00.582 [2024-11-18 00:29:24.350963] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.844 [2024-11-18 00:29:24.422753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.844 [2024-11-18 00:29:24.462575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.844 [2024-11-18 00:29:24.462650] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.844 [2024-11-18 00:29:24.462676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.844 [2024-11-18 00:29:24.462687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.844 [2024-11-18 00:29:24.462696] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:00.844 [2024-11-18 00:29:24.463295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.105 [2024-11-18 00:29:24.697724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.105 [2024-11-18 00:29:24.729736] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:01.105 [2024-11-18 00:29:24.729977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.684 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.684 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:01.684 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:01.684 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:01.685 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.685 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.685 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=281106 00:24:01.685 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 281106 /var/tmp/bdevperf.sock 00:24:01.685 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 281106 ']' 00:24:01.685 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.685 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:01.685 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:24:01.685 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:01.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:01.685 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:01.685 "subsystems": [ 00:24:01.685 { 00:24:01.685 "subsystem": "keyring", 00:24:01.685 "config": [ 00:24:01.685 { 00:24:01.685 "method": "keyring_file_add_key", 00:24:01.685 "params": { 00:24:01.685 "name": "key0", 00:24:01.685 "path": "/tmp/tmp.f9zssfYYJu" 00:24:01.685 } 00:24:01.685 } 00:24:01.685 ] 00:24:01.685 }, 00:24:01.685 { 00:24:01.685 "subsystem": "iobuf", 00:24:01.685 "config": [ 00:24:01.685 { 00:24:01.685 "method": "iobuf_set_options", 00:24:01.685 "params": { 00:24:01.685 "small_pool_count": 8192, 00:24:01.685 "large_pool_count": 1024, 00:24:01.685 "small_bufsize": 8192, 00:24:01.685 "large_bufsize": 135168, 00:24:01.685 "enable_numa": false 00:24:01.685 } 00:24:01.685 } 00:24:01.685 ] 00:24:01.685 }, 00:24:01.686 { 00:24:01.686 "subsystem": "sock", 00:24:01.686 "config": [ 00:24:01.686 { 00:24:01.686 "method": "sock_set_default_impl", 00:24:01.686 "params": { 00:24:01.686 "impl_name": "posix" 00:24:01.686 } 00:24:01.686 }, 00:24:01.686 { 00:24:01.686 "method": "sock_impl_set_options", 00:24:01.686 "params": { 00:24:01.686 "impl_name": "ssl", 00:24:01.686 "recv_buf_size": 4096, 00:24:01.686 "send_buf_size": 4096, 00:24:01.686 "enable_recv_pipe": true, 00:24:01.686 "enable_quickack": false, 00:24:01.686 "enable_placement_id": 0, 00:24:01.686 "enable_zerocopy_send_server": true, 00:24:01.686 "enable_zerocopy_send_client": false, 00:24:01.686 "zerocopy_threshold": 0, 00:24:01.686 "tls_version": 0, 00:24:01.686 "enable_ktls": false 00:24:01.686 } 00:24:01.686 }, 00:24:01.686 { 00:24:01.686 "method": "sock_impl_set_options", 00:24:01.686 "params": { 
00:24:01.686 "impl_name": "posix", 00:24:01.686 "recv_buf_size": 2097152, 00:24:01.686 "send_buf_size": 2097152, 00:24:01.686 "enable_recv_pipe": true, 00:24:01.686 "enable_quickack": false, 00:24:01.686 "enable_placement_id": 0, 00:24:01.686 "enable_zerocopy_send_server": true, 00:24:01.686 "enable_zerocopy_send_client": false, 00:24:01.686 "zerocopy_threshold": 0, 00:24:01.686 "tls_version": 0, 00:24:01.686 "enable_ktls": false 00:24:01.686 } 00:24:01.686 } 00:24:01.686 ] 00:24:01.686 }, 00:24:01.686 { 00:24:01.686 "subsystem": "vmd", 00:24:01.686 "config": [] 00:24:01.686 }, 00:24:01.686 { 00:24:01.686 "subsystem": "accel", 00:24:01.686 "config": [ 00:24:01.686 { 00:24:01.686 "method": "accel_set_options", 00:24:01.686 "params": { 00:24:01.686 "small_cache_size": 128, 00:24:01.686 "large_cache_size": 16, 00:24:01.686 "task_count": 2048, 00:24:01.687 "sequence_count": 2048, 00:24:01.687 "buf_count": 2048 00:24:01.687 } 00:24:01.687 } 00:24:01.687 ] 00:24:01.687 }, 00:24:01.687 { 00:24:01.687 "subsystem": "bdev", 00:24:01.687 "config": [ 00:24:01.687 { 00:24:01.687 "method": "bdev_set_options", 00:24:01.687 "params": { 00:24:01.687 "bdev_io_pool_size": 65535, 00:24:01.687 "bdev_io_cache_size": 256, 00:24:01.687 "bdev_auto_examine": true, 00:24:01.687 "iobuf_small_cache_size": 128, 00:24:01.687 "iobuf_large_cache_size": 16 00:24:01.687 } 00:24:01.687 }, 00:24:01.687 { 00:24:01.687 "method": "bdev_raid_set_options", 00:24:01.687 "params": { 00:24:01.687 "process_window_size_kb": 1024, 00:24:01.687 "process_max_bandwidth_mb_sec": 0 00:24:01.687 } 00:24:01.687 }, 00:24:01.687 { 00:24:01.687 "method": "bdev_iscsi_set_options", 00:24:01.687 "params": { 00:24:01.687 "timeout_sec": 30 00:24:01.687 } 00:24:01.687 }, 00:24:01.687 { 00:24:01.687 "method": "bdev_nvme_set_options", 00:24:01.687 "params": { 00:24:01.687 "action_on_timeout": "none", 00:24:01.687 "timeout_us": 0, 00:24:01.687 "timeout_admin_us": 0, 00:24:01.687 "keep_alive_timeout_ms": 10000, 00:24:01.687 
"arbitration_burst": 0, 00:24:01.687 "low_priority_weight": 0, 00:24:01.687 "medium_priority_weight": 0, 00:24:01.687 "high_priority_weight": 0, 00:24:01.687 "nvme_adminq_poll_period_us": 10000, 00:24:01.687 "nvme_ioq_poll_period_us": 0, 00:24:01.687 "io_queue_requests": 512, 00:24:01.687 "delay_cmd_submit": true, 00:24:01.687 "transport_retry_count": 4, 00:24:01.687 "bdev_retry_count": 3, 00:24:01.687 "transport_ack_timeout": 0, 00:24:01.687 "ctrlr_loss_timeout_sec": 0, 00:24:01.687 "reconnect_delay_sec": 0, 00:24:01.687 "fast_io_fail_timeout_sec": 0, 00:24:01.687 "disable_auto_failback": false, 00:24:01.687 "generate_uuids": false, 00:24:01.687 "transport_tos": 0, 00:24:01.687 "nvme_error_stat": false, 00:24:01.687 "rdma_srq_size": 0, 00:24:01.687 "io_path_stat": false, 00:24:01.687 "allow_accel_sequence": false, 00:24:01.687 "rdma_max_cq_size": 0, 00:24:01.687 "rdma_cm_event_timeout_ms": 0, 00:24:01.687 "dhchap_digests": [ 00:24:01.687 "sha256", 00:24:01.687 "sha384", 00:24:01.687 "sha512" 00:24:01.687 ], 00:24:01.687 "dhchap_dhgroups": [ 00:24:01.687 "null", 00:24:01.687 "ffdhe2048", 00:24:01.687 "ffdhe3072", 00:24:01.687 "ffdhe4096", 00:24:01.687 "ffdhe6144", 00:24:01.687 "ffdhe8192" 00:24:01.687 ] 00:24:01.688 } 00:24:01.688 }, 00:24:01.688 { 00:24:01.688 "method": "bdev_nvme_attach_controller", 00:24:01.688 "params": { 00:24:01.688 "name": "nvme0", 00:24:01.688 "trtype": "TCP", 00:24:01.688 "adrfam": "IPv4", 00:24:01.688 "traddr": "10.0.0.2", 00:24:01.688 "trsvcid": "4420", 00:24:01.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.688 "prchk_reftag": false, 00:24:01.688 "prchk_guard": false, 00:24:01.688 "ctrlr_loss_timeout_sec": 0, 00:24:01.688 "reconnect_delay_sec": 0, 00:24:01.688 "fast_io_fail_timeout_sec": 0, 00:24:01.688 "psk": "key0", 00:24:01.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.688 "hdgst": false, 00:24:01.688 "ddgst": false, 00:24:01.688 "multipath": "multipath" 00:24:01.688 } 00:24:01.688 }, 00:24:01.688 { 00:24:01.688 
"method": "bdev_nvme_set_hotplug", 00:24:01.688 "params": { 00:24:01.688 "period_us": 100000, 00:24:01.688 "enable": false 00:24:01.688 } 00:24:01.688 }, 00:24:01.688 { 00:24:01.688 "method": "bdev_enable_histogram", 00:24:01.688 "params": { 00:24:01.688 "name": "nvme0n1", 00:24:01.688 "enable": true 00:24:01.688 } 00:24:01.688 }, 00:24:01.688 { 00:24:01.688 "method": "bdev_wait_for_examine" 00:24:01.688 } 00:24:01.688 ] 00:24:01.688 }, 00:24:01.688 { 00:24:01.688 "subsystem": "nbd", 00:24:01.688 "config": [] 00:24:01.688 } 00:24:01.688 ] 00:24:01.688 }' 00:24:01.688 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.688 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.688 [2024-11-18 00:29:25.450841] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:01.688 [2024-11-18 00:29:25.450919] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid281106 ] 00:24:01.948 [2024-11-18 00:29:25.517719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.948 [2024-11-18 00:29:25.563612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.948 [2024-11-18 00:29:25.736943] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:02.205 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.205 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:02.205 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:02.205 00:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:02.466 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.467 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:02.467 Running I/O for 1 seconds... 00:24:03.842 3433.00 IOPS, 13.41 MiB/s 00:24:03.842 Latency(us) 00:24:03.842 [2024-11-17T23:29:27.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.842 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:03.842 Verification LBA range: start 0x0 length 0x2000 00:24:03.842 nvme0n1 : 1.02 3494.95 13.65 0.00 0.00 36297.81 6699.24 31457.28 00:24:03.842 [2024-11-17T23:29:27.664Z] =================================================================================================================== 00:24:03.842 [2024-11-17T23:29:27.664Z] Total : 3494.95 13.65 0.00 0.00 36297.81 6699.24 31457.28 00:24:03.842 { 00:24:03.842 "results": [ 00:24:03.842 { 00:24:03.842 "job": "nvme0n1", 00:24:03.842 "core_mask": "0x2", 00:24:03.842 "workload": "verify", 00:24:03.842 "status": "finished", 00:24:03.842 "verify_range": { 00:24:03.842 "start": 0, 00:24:03.842 "length": 8192 00:24:03.842 }, 00:24:03.842 "queue_depth": 128, 00:24:03.842 "io_size": 4096, 00:24:03.842 "runtime": 1.019186, 00:24:03.842 "iops": 3494.945966683216, 00:24:03.842 "mibps": 13.652132682356312, 00:24:03.842 "io_failed": 0, 00:24:03.842 "io_timeout": 0, 00:24:03.842 "avg_latency_us": 36297.80872834654, 00:24:03.842 "min_latency_us": 6699.235555555556, 00:24:03.842 "max_latency_us": 31457.28 00:24:03.842 } 00:24:03.842 ], 00:24:03.842 "core_count": 1 00:24:03.842 } 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:03.842 00:29:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:03.842 nvmf_trace.0 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 281106 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 281106 ']' 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 281106 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 281106 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 281106' 00:24:03.842 killing process with pid 281106 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 281106 00:24:03.842 Received shutdown signal, test time was about 1.000000 seconds 00:24:03.842 00:24:03.842 Latency(us) 00:24:03.842 [2024-11-17T23:29:27.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.842 [2024-11-17T23:29:27.664Z] =================================================================================================================== 00:24:03.842 [2024-11-17T23:29:27.664Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 281106 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.842 rmmod nvme_tcp 00:24:03.842 rmmod nvme_fabrics 00:24:03.842 rmmod nvme_keyring 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 280955 ']' 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 280955 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280955 ']' 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280955 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.842 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280955 00:24:04.102 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:04.102 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:04.102 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280955' 00:24:04.102 killing process with pid 280955 00:24:04.102 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280955 00:24:04.102 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280955 00:24:04.103 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:04.103 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:04.103 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:04.103 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:24:04.103 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:04.103 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:04.103 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.103 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.103 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:04.103 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.103 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.103 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.647 00:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:06.647 00:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.NPcwb9lkxb /tmp/tmp.c3siypWjgW /tmp/tmp.f9zssfYYJu 00:24:06.647 00:24:06.647 real 1m22.155s 00:24:06.647 user 2m15.491s 00:24:06.647 sys 0m25.422s 00:24:06.647 00:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.647 00:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.647 ************************************ 00:24:06.647 END TEST nvmf_tls 00:24:06.647 ************************************ 00:24:06.647 00:29:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:06.647 00:29:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:06.647 00:29:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.647 00:29:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:06.648 ************************************ 00:24:06.648 START TEST nvmf_fips 00:24:06.648 ************************************ 00:24:06.648 00:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:06.648 * Looking for test storage... 00:24:06.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:06.648 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:06.648 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:24:06.648 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:06.648 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:06.648 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.648 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.648 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.648 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.648 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.648 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.649 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.649 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.649 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.649 
00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.649 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.649 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:06.649 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:06.649 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.649 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:06.649 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:06.649 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:06.649 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.649 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:06.649 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.649 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:06.649 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:06.649 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.649 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:06.649 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.650 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.650 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.650 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:06.650 00:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.650 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:06.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.650 --rc genhtml_branch_coverage=1 00:24:06.650 --rc genhtml_function_coverage=1 00:24:06.650 --rc genhtml_legend=1 00:24:06.650 --rc geninfo_all_blocks=1 00:24:06.650 --rc geninfo_unexecuted_blocks=1 00:24:06.650 00:24:06.650 ' 00:24:06.650 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:06.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.650 --rc genhtml_branch_coverage=1 00:24:06.650 --rc genhtml_function_coverage=1 00:24:06.650 --rc genhtml_legend=1 00:24:06.650 --rc geninfo_all_blocks=1 00:24:06.650 --rc geninfo_unexecuted_blocks=1 00:24:06.650 00:24:06.650 ' 00:24:06.650 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:06.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.650 --rc genhtml_branch_coverage=1 00:24:06.650 --rc genhtml_function_coverage=1 00:24:06.650 --rc genhtml_legend=1 00:24:06.650 --rc geninfo_all_blocks=1 00:24:06.650 --rc geninfo_unexecuted_blocks=1 00:24:06.650 00:24:06.650 ' 00:24:06.650 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:06.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.650 --rc genhtml_branch_coverage=1 00:24:06.650 --rc genhtml_function_coverage=1 00:24:06.650 --rc genhtml_legend=1 00:24:06.650 --rc geninfo_all_blocks=1 00:24:06.650 --rc geninfo_unexecuted_blocks=1 00:24:06.650 00:24:06.650 ' 00:24:06.651 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:24:06.651 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:06.651 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.651 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.651 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.651 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.651 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.651 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.651 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.651 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.651 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.651 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.652 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:06.652 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:06.652 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.652 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.652 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.652 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.652 00:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.652 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.652 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.652 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.652 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.653 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.653 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.653 00:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.653 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:06.654 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.654 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:06.654 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.654 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.654 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.654 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:24:06.654 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.654 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.654 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.654 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.654 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.654 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:06.654 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:06.654 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:06.654 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:06.654 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:06.655 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:06.656 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:06.656 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:06.656 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:06.656 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:06.656 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:06.656 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:06.656 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:06.656 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:24:06.656 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.656 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:06.656 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:06.656 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:06.656 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:06.656 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.656 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:06.656 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:06.657 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:06.658 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:06.658 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:06.658 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:06.658 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:06.658 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:06.658 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:06.658 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:06.658 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:06.658 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:06.658 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:06.658 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:06.658 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.658 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:06.658 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.658 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:24:06.659 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.659 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:06.659 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:06.659 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:06.659 Error setting digest 00:24:06.659 408254C5467F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:06.659 408254C5467F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:06.659 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:06.659 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:06.659 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:06.659 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:06.659 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:06.659 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.659 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.659 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.659 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.659 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.659 00:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.659 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.660 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.660 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:06.660 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:06.660 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.660 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:09.196 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:09.196 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:09.196 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:09.196 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.196 00:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:09.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:24:09.196 00:24:09.196 --- 10.0.0.2 ping statistics --- 00:24:09.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.196 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:24:09.196 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:09.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:24:09.196 00:24:09.196 --- 10.0.0.1 ping statistics --- 00:24:09.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.197 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:09.197 00:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=283347 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 283347 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 283347 ']' 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:09.197 [2024-11-18 00:29:32.763160] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:24:09.197 [2024-11-18 00:29:32.763239] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.197 [2024-11-18 00:29:32.836201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.197 [2024-11-18 00:29:32.881964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.197 [2024-11-18 00:29:32.882023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.197 [2024-11-18 00:29:32.882036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.197 [2024-11-18 00:29:32.882047] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.197 [2024-11-18 00:29:32.882056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:09.197 [2024-11-18 00:29:32.882657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:09.197 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:09.455 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.455 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:09.455 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:09.455 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:09.455 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Rrt 00:24:09.455 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:09.455 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Rrt 00:24:09.455 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Rrt 00:24:09.455 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Rrt 00:24:09.455 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:09.713 [2024-11-18 00:29:33.335151] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.713 [2024-11-18 00:29:33.351165] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:09.713 [2024-11-18 00:29:33.351427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.713 malloc0 00:24:09.713 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:09.713 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=283490 00:24:09.713 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:09.713 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 283490 /var/tmp/bdevperf.sock 00:24:09.713 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 283490 ']' 00:24:09.713 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:09.713 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.713 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:09.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:09.713 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.713 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:09.713 [2024-11-18 00:29:33.484173] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:24:09.713 [2024-11-18 00:29:33.484259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283490 ] 00:24:09.972 [2024-11-18 00:29:33.550879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.972 [2024-11-18 00:29:33.596139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.972 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.972 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:09.972 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Rrt 00:24:10.230 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:10.489 [2024-11-18 00:29:34.224117] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:10.489 TLSTESTn1 00:24:10.747 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:10.747 Running I/O for 10 seconds... 
00:24:12.630 2722.00 IOPS, 10.63 MiB/s [2024-11-17T23:29:37.824Z] 2779.50 IOPS, 10.86 MiB/s [2024-11-17T23:29:38.835Z] 2779.67 IOPS, 10.86 MiB/s [2024-11-17T23:29:39.479Z] 2812.50 IOPS, 10.99 MiB/s [2024-11-17T23:29:40.533Z] 2805.00 IOPS, 10.96 MiB/s [2024-11-17T23:29:41.473Z] 2822.50 IOPS, 11.03 MiB/s [2024-11-17T23:29:42.846Z] 2821.29 IOPS, 11.02 MiB/s [2024-11-17T23:29:43.784Z] 2832.62 IOPS, 11.06 MiB/s [2024-11-17T23:29:44.717Z] 2828.11 IOPS, 11.05 MiB/s [2024-11-17T23:29:44.717Z] 2830.60 IOPS, 11.06 MiB/s 00:24:20.895 Latency(us) 00:24:20.895 [2024-11-17T23:29:44.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.895 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:20.895 Verification LBA range: start 0x0 length 0x2000 00:24:20.895 TLSTESTn1 : 10.03 2833.56 11.07 0.00 0.00 45075.10 10291.58 91653.31 00:24:20.895 [2024-11-17T23:29:44.717Z] =================================================================================================================== 00:24:20.895 [2024-11-17T23:29:44.717Z] Total : 2833.56 11.07 0.00 0.00 45075.10 10291.58 91653.31 00:24:20.895 { 00:24:20.895 "results": [ 00:24:20.895 { 00:24:20.895 "job": "TLSTESTn1", 00:24:20.895 "core_mask": "0x4", 00:24:20.895 "workload": "verify", 00:24:20.895 "status": "finished", 00:24:20.895 "verify_range": { 00:24:20.895 "start": 0, 00:24:20.895 "length": 8192 00:24:20.895 }, 00:24:20.895 "queue_depth": 128, 00:24:20.895 "io_size": 4096, 00:24:20.895 "runtime": 10.03438, 00:24:20.895 "iops": 2833.5582268162057, 00:24:20.895 "mibps": 11.068586823500803, 00:24:20.895 "io_failed": 0, 00:24:20.895 "io_timeout": 0, 00:24:20.895 "avg_latency_us": 45075.09909773594, 00:24:20.895 "min_latency_us": 10291.579259259259, 00:24:20.895 "max_latency_us": 91653.30962962963 00:24:20.895 } 00:24:20.895 ], 00:24:20.895 "core_count": 1 00:24:20.895 } 00:24:20.895 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:20.895 
00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:20.895 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:20.895 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:20.895 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:20.896 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:20.896 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:20.896 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:20.896 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:20.896 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:20.896 nvmf_trace.0 00:24:20.896 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:20.896 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 283490 00:24:20.896 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 283490 ']' 00:24:20.896 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 283490 00:24:20.896 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:20.896 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.896 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 283490 00:24:20.896 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:20.896 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:20.896 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 283490' 00:24:20.896 killing process with pid 283490 00:24:20.896 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 283490 00:24:20.896 Received shutdown signal, test time was about 10.000000 seconds 00:24:20.896 00:24:20.896 Latency(us) 00:24:20.896 [2024-11-17T23:29:44.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.896 [2024-11-17T23:29:44.718Z] =================================================================================================================== 00:24:20.896 [2024-11-17T23:29:44.718Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.896 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 283490 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:21.154 rmmod nvme_tcp 00:24:21.154 rmmod nvme_fabrics 00:24:21.154 rmmod nvme_keyring 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:21.154 00:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 283347 ']' 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 283347 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 283347 ']' 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 283347 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 283347 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 283347' 00:24:21.154 killing process with pid 283347 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 283347 00:24:21.154 00:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 283347 00:24:21.413 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:21.413 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:21.413 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:21.413 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:24:21.413 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:21.413 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:21.413 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:21.413 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:21.413 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:21.413 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.413 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.413 00:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Rrt 00:24:23.961 00:24:23.961 real 0m17.192s 00:24:23.961 user 0m18.680s 00:24:23.961 sys 0m7.299s 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:23.961 ************************************ 00:24:23.961 END TEST nvmf_fips 00:24:23.961 ************************************ 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:23.961 ************************************ 00:24:23.961 START TEST nvmf_control_msg_list 00:24:23.961 ************************************ 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:23.961 * Looking for test storage... 00:24:23.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:23.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.961 --rc genhtml_branch_coverage=1 00:24:23.961 --rc genhtml_function_coverage=1 00:24:23.961 --rc genhtml_legend=1 00:24:23.961 --rc geninfo_all_blocks=1 00:24:23.961 --rc geninfo_unexecuted_blocks=1 00:24:23.961 00:24:23.961 ' 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:23.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.961 --rc genhtml_branch_coverage=1 00:24:23.961 --rc genhtml_function_coverage=1 00:24:23.961 --rc genhtml_legend=1 00:24:23.961 --rc geninfo_all_blocks=1 00:24:23.961 --rc geninfo_unexecuted_blocks=1 00:24:23.961 00:24:23.961 ' 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:23.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.961 --rc genhtml_branch_coverage=1 00:24:23.961 --rc genhtml_function_coverage=1 00:24:23.961 --rc genhtml_legend=1 00:24:23.961 --rc geninfo_all_blocks=1 00:24:23.961 --rc geninfo_unexecuted_blocks=1 00:24:23.961 00:24:23.961 ' 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:23.961 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.961 --rc genhtml_branch_coverage=1 00:24:23.961 --rc genhtml_function_coverage=1 00:24:23.961 --rc genhtml_legend=1 00:24:23.961 --rc geninfo_all_blocks=1 00:24:23.961 --rc geninfo_unexecuted_blocks=1 00:24:23.961 00:24:23.961 ' 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:23.961 00:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:23.961 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.962 00:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:23.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:23.962 00:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:23.962 00:29:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:25.864 00:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:25.864 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:25.864 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:25.864 00:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:25.864 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.864 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:25.865 00:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:25.865 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:25.865 00:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:25.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:25.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:24:25.865 00:24:25.865 --- 10.0.0.2 ping statistics --- 00:24:25.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.865 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:25.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:25.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:24:25.865 00:24:25.865 --- 10.0.0.1 ping statistics --- 00:24:25.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.865 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=286772 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 286772 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 286772 ']' 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:25.865 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:25.865 [2024-11-18 00:29:49.617828] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:25.865 [2024-11-18 00:29:49.617934] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.123 [2024-11-18 00:29:49.690185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.123 [2024-11-18 00:29:49.731907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.123 [2024-11-18 00:29:49.731966] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.123 [2024-11-18 00:29:49.731988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.123 [2024-11-18 00:29:49.731999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.123 [2024-11-18 00:29:49.732008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:26.123 [2024-11-18 00:29:49.732650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.123 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.123 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:26.123 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:26.123 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:26.123 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:26.123 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.123 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:26.123 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:26.123 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:26.123 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.123 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:26.123 [2024-11-18 00:29:49.866421] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.123 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.123 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:26.123 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:26.124 Malloc0 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:26.124 [2024-11-18 00:29:49.906340] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=286798 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=286799 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=286800 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:26.124 00:29:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 286798 00:24:26.382 [2024-11-18 00:29:49.975251] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:26.382 [2024-11-18 00:29:49.975577] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:26.382 [2024-11-18 00:29:49.984883] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:27.319 Initializing NVMe Controllers 00:24:27.319 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:27.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:27.319 Initialization complete. Launching workers. 00:24:27.319 ======================================================== 00:24:27.319 Latency(us) 00:24:27.319 Device Information : IOPS MiB/s Average min max 00:24:27.319 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4541.00 17.74 219.73 151.94 505.38 00:24:27.319 ======================================================== 00:24:27.319 Total : 4541.00 17.74 219.73 151.94 505.38 00:24:27.319 00:24:27.319 Initializing NVMe Controllers 00:24:27.319 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:27.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:27.319 Initialization complete. Launching workers. 
00:24:27.319 ======================================================== 00:24:27.319 Latency(us) 00:24:27.319 Device Information : IOPS MiB/s Average min max 00:24:27.319 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40908.71 40775.89 41200.90 00:24:27.319 ======================================================== 00:24:27.319 Total : 25.00 0.10 40908.71 40775.89 41200.90 00:24:27.319 00:24:27.577 Initializing NVMe Controllers 00:24:27.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:27.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:27.577 Initialization complete. Launching workers. 00:24:27.577 ======================================================== 00:24:27.577 Latency(us) 00:24:27.577 Device Information : IOPS MiB/s Average min max 00:24:27.577 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4520.98 17.66 220.78 156.39 458.02 00:24:27.577 ======================================================== 00:24:27.578 Total : 4520.98 17.66 220.78 156.39 458.02 00:24:27.578 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 286799 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 286800 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:27.578 00:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:27.578 rmmod nvme_tcp 00:24:27.578 rmmod nvme_fabrics 00:24:27.578 rmmod nvme_keyring 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 286772 ']' 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 286772 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 286772 ']' 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 286772 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 286772 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 286772' 00:24:27.578 killing process with pid 286772 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 286772 00:24:27.578 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 286772 00:24:27.838 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:27.838 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:27.838 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:27.838 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:27.838 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:27.838 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:27.838 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:27.838 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:27.838 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:27.838 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.838 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.838 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.752 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:29.752 00:24:29.752 real 0m6.261s 00:24:29.752 user 0m5.435s 00:24:29.752 sys 
0m2.672s 00:24:29.752 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:29.752 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:29.752 ************************************ 00:24:29.752 END TEST nvmf_control_msg_list 00:24:29.752 ************************************ 00:24:29.752 00:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:29.752 00:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:29.752 00:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:29.752 00:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:29.752 ************************************ 00:24:29.752 START TEST nvmf_wait_for_buf 00:24:29.753 ************************************ 00:24:29.753 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:30.018 * Looking for test storage... 
00:24:30.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:24:30.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.018 --rc genhtml_branch_coverage=1 00:24:30.018 --rc genhtml_function_coverage=1 00:24:30.018 --rc genhtml_legend=1 00:24:30.018 --rc geninfo_all_blocks=1 00:24:30.018 --rc geninfo_unexecuted_blocks=1 00:24:30.018 00:24:30.018 ' 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:30.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.018 --rc genhtml_branch_coverage=1 00:24:30.018 --rc genhtml_function_coverage=1 00:24:30.018 --rc genhtml_legend=1 00:24:30.018 --rc geninfo_all_blocks=1 00:24:30.018 --rc geninfo_unexecuted_blocks=1 00:24:30.018 00:24:30.018 ' 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:30.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.018 --rc genhtml_branch_coverage=1 00:24:30.018 --rc genhtml_function_coverage=1 00:24:30.018 --rc genhtml_legend=1 00:24:30.018 --rc geninfo_all_blocks=1 00:24:30.018 --rc geninfo_unexecuted_blocks=1 00:24:30.018 00:24:30.018 ' 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:30.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.018 --rc genhtml_branch_coverage=1 00:24:30.018 --rc genhtml_function_coverage=1 00:24:30.018 --rc genhtml_legend=1 00:24:30.018 --rc geninfo_all_blocks=1 00:24:30.018 --rc geninfo_unexecuted_blocks=1 00:24:30.018 00:24:30.018 ' 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.018 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:30.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:30.019 00:29:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.554 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.554 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:32.554 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:32.554 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:32.554 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:32.554 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:32.554 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:32.554 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:32.554 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:32.554 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:32.554 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:32.554 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:32.554 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:32.555 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:32.555 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:32.555 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.555 00:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:32.555 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:32.555 00:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.555 00:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.555 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.555 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.555 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:32.555 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.555 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.555 00:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.555 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:32.555 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:32.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:24:32.555 00:24:32.555 --- 10.0.0.2 ping statistics --- 00:24:32.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.555 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:24:32.555 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:32.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:24:32.555 00:24:32.555 --- 10.0.0.1 ping statistics --- 00:24:32.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.555 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:24:32.555 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.555 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:32.555 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:32.555 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.555 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:32.555 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:32.555 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.555 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:32.555 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:32.555 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:32.556 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:32.556 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:32.556 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.556 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=288876 00:24:32.556 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:32.556 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 288876 00:24:32.556 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 288876 ']' 00:24:32.556 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.556 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.556 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.556 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.556 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.556 [2024-11-18 00:29:56.159462] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:32.556 [2024-11-18 00:29:56.159556] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.556 [2024-11-18 00:29:56.230512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.556 [2024-11-18 00:29:56.276000] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.556 [2024-11-18 00:29:56.276054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:32.556 [2024-11-18 00:29:56.276079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.556 [2024-11-18 00:29:56.276090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.556 [2024-11-18 00:29:56.276099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:32.556 [2024-11-18 00:29:56.276790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.814 
00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.814 Malloc0 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:24:32.814 [2024-11-18 00:29:56.539947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.814 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.815 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:32.815 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.815 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.815 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.815 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:32.815 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.815 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.815 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.815 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:32.815 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.815 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:32.815 [2024-11-18 00:29:56.564138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.815 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:32.815 00:29:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:33.073 [2024-11-18 00:29:56.652454] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:34.446 Initializing NVMe Controllers 00:24:34.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:34.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:34.446 Initialization complete. Launching workers. 00:24:34.446 ======================================================== 00:24:34.446 Latency(us) 00:24:34.446 Device Information : IOPS MiB/s Average min max 00:24:34.446 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 114.00 14.25 37169.46 7984.64 74813.95 00:24:34.446 ======================================================== 00:24:34.446 Total : 114.00 14.25 37169.46 7984.64 74813.95 00:24:34.446 00:24:34.446 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:34.446 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:34.446 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.446 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:34.447 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.447 00:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1798 00:24:34.447 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1798 -eq 0 ]] 00:24:34.447 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:34.447 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:34.447 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:34.447 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:34.447 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:34.447 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:34.447 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:34.447 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:34.447 rmmod nvme_tcp 00:24:34.447 rmmod nvme_fabrics 00:24:34.447 rmmod nvme_keyring 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 288876 ']' 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 288876 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 288876 ']' 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 288876 
00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 288876 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 288876' 00:24:34.705 killing process with pid 288876 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 288876 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 288876 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:34.705 00:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.705 00:29:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:37.255 00:24:37.255 real 0m6.976s 00:24:37.255 user 0m3.330s 00:24:37.255 sys 0m2.106s 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:37.255 ************************************ 00:24:37.255 END TEST nvmf_wait_for_buf 00:24:37.255 ************************************ 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:37.255 ************************************ 00:24:37.255 START TEST nvmf_fuzz 00:24:37.255 ************************************ 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:24:37.255 * Looking for test storage... 00:24:37.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:37.255 00:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:37.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.255 --rc genhtml_branch_coverage=1 00:24:37.255 --rc genhtml_function_coverage=1 
00:24:37.255 --rc genhtml_legend=1 00:24:37.255 --rc geninfo_all_blocks=1 00:24:37.255 --rc geninfo_unexecuted_blocks=1 00:24:37.255 00:24:37.255 ' 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:37.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.255 --rc genhtml_branch_coverage=1 00:24:37.255 --rc genhtml_function_coverage=1 00:24:37.255 --rc genhtml_legend=1 00:24:37.255 --rc geninfo_all_blocks=1 00:24:37.255 --rc geninfo_unexecuted_blocks=1 00:24:37.255 00:24:37.255 ' 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:37.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.255 --rc genhtml_branch_coverage=1 00:24:37.255 --rc genhtml_function_coverage=1 00:24:37.255 --rc genhtml_legend=1 00:24:37.255 --rc geninfo_all_blocks=1 00:24:37.255 --rc geninfo_unexecuted_blocks=1 00:24:37.255 00:24:37.255 ' 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:37.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.255 --rc genhtml_branch_coverage=1 00:24:37.255 --rc genhtml_function_coverage=1 00:24:37.255 --rc genhtml_legend=1 00:24:37.255 --rc geninfo_all_blocks=1 00:24:37.255 --rc geninfo_unexecuted_blocks=1 00:24:37.255 00:24:37.255 ' 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.255 
00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:37.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:37.255 00:30:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.155 00:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:24:39.155 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:39.155 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:39.155 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:39.155 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:39.155 00:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:39.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:24:39.155 00:24:39.155 --- 10.0.0.2 ping statistics --- 00:24:39.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.155 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:24:39.155 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:39.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:24:39.156 00:24:39.156 --- 10.0.0.1 ping statistics --- 00:24:39.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.156 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=291203 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 291203 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 291203 ']' 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.156 00:30:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:39.738 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.738 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:39.738 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:39.738 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.738 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:39.738 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.738 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:39.738 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.738 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:39.738 Malloc0 00:24:39.738 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.738 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:39.738 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.738 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:39.738 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.738 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:39.738 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.738 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:39.739 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.739 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:39.739 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.739 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:39.739 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.739 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:39.739 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:11.807 Fuzzing completed. 
Shutting down the fuzz application 00:25:11.807 00:25:11.807 Dumping successful admin opcodes: 00:25:11.807 8, 9, 10, 24, 00:25:11.807 Dumping successful io opcodes: 00:25:11.807 0, 9, 00:25:11.807 NS: 0x2000008eff00 I/O qp, Total commands completed: 492299, total successful commands: 2831, random_seed: 3003131648 00:25:11.807 NS: 0x2000008eff00 admin qp, Total commands completed: 59312, total successful commands: 470, random_seed: 3922258688 00:25:11.807 00:30:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:11.807 Fuzzing completed. Shutting down the fuzz application 00:25:11.807 00:25:11.807 Dumping successful admin opcodes: 00:25:11.807 24, 00:25:11.807 Dumping successful io opcodes: 00:25:11.807 00:25:11.807 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3957024755 00:25:11.807 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3957133385 00:25:11.807 00:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:11.807 00:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.807 00:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:11.807 00:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.807 00:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:11.807 00:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:11.807 00:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:11.807 00:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:11.807 00:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:11.807 00:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:11.807 00:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:11.807 00:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:11.807 rmmod nvme_tcp 00:25:11.807 rmmod nvme_fabrics 00:25:11.807 rmmod nvme_keyring 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 291203 ']' 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 291203 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 291203 ']' 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 291203 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291203 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291203' 00:25:11.807 killing process with pid 291203 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 291203 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 291203 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.807 00:30:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.715 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:13.715 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:13.715 00:25:13.715 real 0m36.827s 00:25:13.715 user 0m51.106s 00:25:13.715 sys 0m14.802s 00:25:13.715 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.715 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.715 ************************************ 00:25:13.715 END TEST nvmf_fuzz 00:25:13.715 ************************************ 00:25:13.715 00:30:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:13.715 00:30:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:13.715 00:30:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:13.715 00:30:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:13.715 ************************************ 00:25:13.715 START TEST nvmf_multiconnection 00:25:13.715 ************************************ 00:25:13.715 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:13.715 * Looking for test storage... 
00:25:13.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:13.715 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:13.715 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:25:13.715 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:13.974 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:13.974 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:13.974 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:13.974 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:13.974 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:13.974 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:13.974 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:13.974 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:13.975 00:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:13.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.975 --rc genhtml_branch_coverage=1 00:25:13.975 --rc genhtml_function_coverage=1 00:25:13.975 --rc genhtml_legend=1 00:25:13.975 --rc geninfo_all_blocks=1 00:25:13.975 --rc geninfo_unexecuted_blocks=1 00:25:13.975 00:25:13.975 ' 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:13.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.975 --rc genhtml_branch_coverage=1 00:25:13.975 --rc genhtml_function_coverage=1 00:25:13.975 --rc genhtml_legend=1 00:25:13.975 --rc geninfo_all_blocks=1 00:25:13.975 --rc geninfo_unexecuted_blocks=1 00:25:13.975 00:25:13.975 ' 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:13.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.975 --rc genhtml_branch_coverage=1 00:25:13.975 --rc genhtml_function_coverage=1 00:25:13.975 --rc genhtml_legend=1 00:25:13.975 --rc geninfo_all_blocks=1 00:25:13.975 --rc geninfo_unexecuted_blocks=1 00:25:13.975 00:25:13.975 ' 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:13.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.975 --rc genhtml_branch_coverage=1 00:25:13.975 --rc genhtml_function_coverage=1 00:25:13.975 --rc genhtml_legend=1 00:25:13.975 --rc geninfo_all_blocks=1 00:25:13.975 --rc geninfo_unexecuted_blocks=1 00:25:13.975 00:25:13.975 ' 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.975 00:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:13.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:13.975 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.976 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:13.976 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:13.976 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:13.976 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.976 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.976 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.976 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:13.976 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:13.976 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:13.976 00:30:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.508 00:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:16.508 00:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:16.508 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:16.508 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:16.508 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:16.509 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:16.509 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:16.509 00:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:16.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:25:16.509 00:25:16.509 --- 10.0.0.2 ping statistics --- 00:25:16.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.509 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:16.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:16.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:25:16.509 00:25:16.509 --- 10.0.0.1 ping statistics --- 00:25:16.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.509 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=297432 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 297432 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 297432 ']' 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.509 00:30:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.509 [2024-11-18 00:30:39.991332] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:25:16.509 [2024-11-18 00:30:39.991438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.509 [2024-11-18 00:30:40.071536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:16.509 [2024-11-18 00:30:40.119477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.509 [2024-11-18 00:30:40.119532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.509 [2024-11-18 00:30:40.119555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.509 [2024-11-18 00:30:40.119566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.509 [2024-11-18 00:30:40.119576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:16.509 [2024-11-18 00:30:40.121174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.509 [2024-11-18 00:30:40.121233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:16.509 [2024-11-18 00:30:40.121369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:16.510 [2024-11-18 00:30:40.121374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.510 [2024-11-18 00:30:40.259719] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:16.510 00:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.510 Malloc1 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.510 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.769 [2024-11-18 00:30:40.335983] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.769 Malloc2 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.769 Malloc3 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.769 Malloc4 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.769 
00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.769 Malloc5 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.769 00:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:16.769 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.770 Malloc6 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.770 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 Malloc7 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 Malloc8 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 Malloc9 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 Malloc10 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 Malloc11 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:17.027 
00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:17.027 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.028 00:30:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
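The eleven iterations traced above all follow the same four-step pattern from `multiconnection.sh` (lines 21-25 in the trace): create a 64 MiB/512 B malloc bdev, create the subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A sketch of that loop, printing the RPC commands rather than executing them (a live `nvmf_tgt` is assumed in the real test; `gen_subsys_cmds` is a hypothetical helper name):

```shell
# One iteration of the per-subsystem setup loop seen in the trace.
# Commands are printed, not executed; pipe them into SPDK's rpc.py
# to apply against a running target.
gen_subsys_cmds() {
    local i=$1
    echo "bdev_malloc_create 64 512 -b Malloc$i"
    echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
    echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
    echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
}

NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
    gen_subsys_cmds "$i"
done
```

Each subsystem is created with `-a` (allow any host) and a serial number `SPDK$i`, which is what the host-side `waitforserial` check later greps for.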
00:25:17.960 00:30:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:17.960 00:30:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:17.960 00:30:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:17.960 00:30:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:17.960 00:30:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:19.858 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:19.858 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:19.858 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:19.858 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:19.858 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:19.858 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:19.858 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:19.858 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:20.423 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:20.423 00:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:20.423 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:20.423 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:20.423 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:22.949 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:22.949 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:22.949 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:22.949 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:22.949 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:22.949 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:22.949 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.949 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:23.207 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:23.207 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:23.207 00:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:23.207 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:23.207 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:25.110 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:25.110 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:25.110 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:25.110 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:25.110 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:25.110 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:25.110 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.110 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:26.044 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:26.044 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:26.044 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:26.044 
00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:26.044 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:27.940 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:27.940 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:27.940 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:27.940 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:27.940 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:27.940 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:27.940 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.940 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:28.507 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:28.507 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:28.507 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:28.507 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:28.507 00:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:31.055 00:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:31.055 00:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:31.055 00:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:31.055 00:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:31.055 00:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:31.055 00:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:31.055 00:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.055 00:30:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:31.313 00:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:31.313 00:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:31.313 00:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:31.313 00:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:31.313 00:30:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:33.841 00:30:57 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:33.841 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:33.841 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:33.841 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:33.841 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:33.841 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:33.841 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.841 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:34.406 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:34.406 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:34.406 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:34.406 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:34.406 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:36.309 00:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:36.309 00:30:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:36.309 00:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:36.309 00:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:36.309 00:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:36.309 00:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:36.309 00:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:36.309 00:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:37.248 00:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:37.248 00:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:37.248 00:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:37.248 00:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:37.248 00:31:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:39.154 00:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:39.155 00:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:39.155 00:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:39.155 00:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:39.155 00:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:39.155 00:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:39.155 00:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.155 00:31:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:40.090 00:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:40.090 00:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:40.090 00:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:40.090 00:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:40.090 00:31:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:42.614 00:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:42.614 00:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:42.614 00:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:42.614 00:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:42.614 00:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:42.614 00:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:42.614 00:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:42.614 00:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:43.180 00:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:43.180 00:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:43.180 00:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:43.180 00:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:43.180 00:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:45.090 00:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:45.090 00:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:45.090 00:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:25:45.090 00:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:45.090 00:31:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:45.090 00:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:45.090 00:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:45.090 00:31:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:46.036 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:46.036 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:46.036 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:46.036 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:46.036 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:47.939 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:47.939 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:47.939 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:25:47.939 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:47.939 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:47.939 
00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:47.939 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:47.939 [global] 00:25:47.939 thread=1 00:25:47.939 invalidate=1 00:25:47.939 rw=read 00:25:47.939 time_based=1 00:25:47.939 runtime=10 00:25:47.939 ioengine=libaio 00:25:47.939 direct=1 00:25:47.939 bs=262144 00:25:47.939 iodepth=64 00:25:47.939 norandommap=1 00:25:47.939 numjobs=1 00:25:47.939 00:25:47.939 [job0] 00:25:47.939 filename=/dev/nvme0n1 00:25:47.939 [job1] 00:25:47.939 filename=/dev/nvme10n1 00:25:47.939 [job2] 00:25:47.939 filename=/dev/nvme1n1 00:25:47.939 [job3] 00:25:47.939 filename=/dev/nvme2n1 00:25:47.939 [job4] 00:25:47.939 filename=/dev/nvme3n1 00:25:47.939 [job5] 00:25:47.939 filename=/dev/nvme4n1 00:25:47.939 [job6] 00:25:47.939 filename=/dev/nvme5n1 00:25:47.939 [job7] 00:25:47.939 filename=/dev/nvme6n1 00:25:47.939 [job8] 00:25:47.939 filename=/dev/nvme7n1 00:25:47.939 [job9] 00:25:47.939 filename=/dev/nvme8n1 00:25:47.939 [job10] 00:25:47.939 filename=/dev/nvme9n1 00:25:47.939 Could not set queue depth (nvme0n1) 00:25:47.939 Could not set queue depth (nvme10n1) 00:25:47.939 Could not set queue depth (nvme1n1) 00:25:47.939 Could not set queue depth (nvme2n1) 00:25:47.939 Could not set queue depth (nvme3n1) 00:25:47.939 Could not set queue depth (nvme4n1) 00:25:47.939 Could not set queue depth (nvme5n1) 00:25:47.939 Could not set queue depth (nvme6n1) 00:25:47.939 Could not set queue depth (nvme7n1) 00:25:47.939 Could not set queue depth (nvme8n1) 00:25:47.939 Could not set queue depth (nvme9n1) 00:25:48.197 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.197 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:25:48.197 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.197 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.197 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.197 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.197 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.197 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.197 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.197 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.197 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.197 fio-3.35 00:25:48.197 Starting 11 threads 00:26:00.429 00:26:00.429 job0: (groupid=0, jobs=1): err= 0: pid=301689: Mon Nov 18 00:31:22 2024 00:26:00.429 read: IOPS=199, BW=49.9MiB/s (52.3MB/s)(511MiB/10246msec) 00:26:00.429 slat (usec): min=12, max=215130, avg=4908.28, stdev=20097.23 00:26:00.429 clat (msec): min=47, max=1040, avg=315.61, stdev=163.97 00:26:00.429 lat (msec): min=47, max=1040, avg=320.52, stdev=166.50 00:26:00.429 clat percentiles (msec): 00:26:00.429 | 1.00th=[ 50], 5.00th=[ 71], 10.00th=[ 84], 20.00th=[ 197], 00:26:00.429 | 30.00th=[ 255], 40.00th=[ 292], 50.00th=[ 317], 60.00th=[ 347], 00:26:00.429 | 70.00th=[ 363], 80.00th=[ 405], 90.00th=[ 567], 95.00th=[ 642], 00:26:00.429 | 99.00th=[ 768], 99.50th=[ 894], 99.90th=[ 894], 99.95th=[ 894], 00:26:00.429 | 99.99th=[ 1045] 00:26:00.429 bw ( KiB/s): min=16384, max=166912, 
per=6.62%, avg=50692.20, stdev=31344.32, samples=20 00:26:00.429 iops : min= 64, max= 652, avg=198.00, stdev=122.44, samples=20 00:26:00.429 lat (msec) : 50=1.03%, 100=15.41%, 250=12.08%, 500=60.42%, 750=9.74% 00:26:00.429 lat (msec) : 1000=1.27%, 2000=0.05% 00:26:00.429 cpu : usr=0.10%, sys=0.67%, ctx=223, majf=0, minf=4097 00:26:00.429 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:00.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.429 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.429 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.429 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.429 job1: (groupid=0, jobs=1): err= 0: pid=301690: Mon Nov 18 00:31:22 2024 00:26:00.429 read: IOPS=403, BW=101MiB/s (106MB/s)(1011MiB/10028msec) 00:26:00.429 slat (usec): min=12, max=176784, avg=2336.11, stdev=10751.73 00:26:00.429 clat (usec): min=1885, max=697902, avg=156333.07, stdev=127949.55 00:26:00.429 lat (usec): min=1916, max=697939, avg=158669.19, stdev=129781.53 00:26:00.430 clat percentiles (msec): 00:26:00.430 | 1.00th=[ 9], 5.00th=[ 43], 10.00th=[ 61], 20.00th=[ 70], 00:26:00.430 | 30.00th=[ 81], 40.00th=[ 94], 50.00th=[ 111], 60.00th=[ 131], 00:26:00.430 | 70.00th=[ 167], 80.00th=[ 224], 90.00th=[ 330], 95.00th=[ 481], 00:26:00.430 | 99.00th=[ 584], 99.50th=[ 634], 99.90th=[ 684], 99.95th=[ 684], 00:26:00.430 | 99.99th=[ 701] 00:26:00.430 bw ( KiB/s): min=29184, max=220160, per=13.31%, avg=101862.40, stdev=61704.52, samples=20 00:26:00.430 iops : min= 114, max= 860, avg=397.90, stdev=241.03, samples=20 00:26:00.430 lat (msec) : 2=0.02%, 4=0.25%, 10=1.86%, 20=0.94%, 50=3.36% 00:26:00.430 lat (msec) : 100=37.31%, 250=39.61%, 500=12.49%, 750=4.16% 00:26:00.430 cpu : usr=0.24%, sys=1.48%, ctx=619, majf=0, minf=4097 00:26:00.430 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:00.430 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.430 issued rwts: total=4042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.430 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.430 job2: (groupid=0, jobs=1): err= 0: pid=301691: Mon Nov 18 00:31:22 2024 00:26:00.430 read: IOPS=177, BW=44.3MiB/s (46.4MB/s)(454MiB/10249msec) 00:26:00.430 slat (usec): min=9, max=366047, avg=4741.26, stdev=23126.61 00:26:00.430 clat (usec): min=1841, max=1036.0k, avg=356353.31, stdev=175138.24 00:26:00.430 lat (usec): min=1872, max=1036.1k, avg=361094.57, stdev=177986.13 00:26:00.430 clat percentiles (msec): 00:26:00.430 | 1.00th=[ 9], 5.00th=[ 150], 10.00th=[ 184], 20.00th=[ 230], 00:26:00.430 | 30.00th=[ 257], 40.00th=[ 279], 50.00th=[ 321], 60.00th=[ 355], 00:26:00.430 | 70.00th=[ 414], 80.00th=[ 493], 90.00th=[ 625], 95.00th=[ 667], 00:26:00.430 | 99.00th=[ 869], 99.50th=[ 978], 99.90th=[ 1036], 99.95th=[ 1036], 00:26:00.430 | 99.99th=[ 1036] 00:26:00.430 bw ( KiB/s): min=14848, max=72192, per=5.86%, avg=44825.60, stdev=17339.29, samples=20 00:26:00.430 iops : min= 58, max= 282, avg=175.10, stdev=67.73, samples=20 00:26:00.430 lat (msec) : 2=0.06%, 4=0.17%, 10=2.20%, 20=0.06%, 50=0.44% 00:26:00.430 lat (msec) : 100=1.38%, 250=23.42%, 500=52.62%, 750=16.86%, 1000=2.70% 00:26:00.430 lat (msec) : 2000=0.11% 00:26:00.430 cpu : usr=0.08%, sys=0.52%, ctx=318, majf=0, minf=4097 00:26:00.430 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:26:00.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.430 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.430 issued rwts: total=1815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.430 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.430 job3: (groupid=0, jobs=1): err= 0: pid=301692: Mon Nov 18 00:31:22 2024 00:26:00.430 
read: IOPS=289, BW=72.3MiB/s (75.8MB/s)(740MiB/10244msec) 00:26:00.430 slat (usec): min=8, max=253693, avg=2199.65, stdev=12946.58 00:26:00.430 clat (usec): min=1771, max=890722, avg=219026.00, stdev=162444.07 00:26:00.430 lat (msec): min=2, max=890, avg=221.23, stdev=164.14 00:26:00.430 clat percentiles (msec): 00:26:00.430 | 1.00th=[ 7], 5.00th=[ 26], 10.00th=[ 41], 20.00th=[ 91], 00:26:00.430 | 30.00th=[ 121], 40.00th=[ 146], 50.00th=[ 180], 60.00th=[ 228], 00:26:00.430 | 70.00th=[ 262], 80.00th=[ 330], 90.00th=[ 443], 95.00th=[ 567], 00:26:00.430 | 99.00th=[ 735], 99.50th=[ 844], 99.90th=[ 894], 99.95th=[ 894], 00:26:00.430 | 99.99th=[ 894] 00:26:00.430 bw ( KiB/s): min=16384, max=171520, per=9.69%, avg=74163.20, stdev=43246.74, samples=20 00:26:00.430 iops : min= 64, max= 670, avg=289.70, stdev=168.93, samples=20 00:26:00.430 lat (msec) : 2=0.03%, 4=0.24%, 10=1.79%, 20=2.20%, 50=7.90% 00:26:00.430 lat (msec) : 100=11.01%, 250=45.19%, 500=24.69%, 750=6.32%, 1000=0.64% 00:26:00.430 cpu : usr=0.06%, sys=0.82%, ctx=553, majf=0, minf=4098 00:26:00.430 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:00.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.430 issued rwts: total=2961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.430 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.430 job4: (groupid=0, jobs=1): err= 0: pid=301693: Mon Nov 18 00:31:22 2024 00:26:00.430 read: IOPS=210, BW=52.7MiB/s (55.2MB/s)(540MiB/10249msec) 00:26:00.430 slat (usec): min=12, max=241634, avg=4593.48, stdev=19146.51 00:26:00.430 clat (msec): min=62, max=973, avg=298.81, stdev=173.30 00:26:00.430 lat (msec): min=62, max=973, avg=303.40, stdev=175.75 00:26:00.430 clat percentiles (msec): 00:26:00.430 | 1.00th=[ 65], 5.00th=[ 81], 10.00th=[ 93], 20.00th=[ 118], 00:26:00.430 | 30.00th=[ 203], 40.00th=[ 257], 50.00th=[ 288], 
60.00th=[ 321], 00:26:00.430 | 70.00th=[ 351], 80.00th=[ 384], 90.00th=[ 558], 95.00th=[ 684], 00:26:00.430 | 99.00th=[ 768], 99.50th=[ 902], 99.90th=[ 978], 99.95th=[ 978], 00:26:00.430 | 99.99th=[ 978] 00:26:00.430 bw ( KiB/s): min=18432, max=171008, per=7.01%, avg=53657.60, stdev=33383.52, samples=20 00:26:00.430 iops : min= 72, max= 668, avg=209.60, stdev=130.40, samples=20 00:26:00.430 lat (msec) : 100=14.95%, 250=22.78%, 500=50.60%, 750=10.00%, 1000=1.67% 00:26:00.430 cpu : usr=0.10%, sys=0.73%, ctx=250, majf=0, minf=4097 00:26:00.430 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:00.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.430 issued rwts: total=2160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.430 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.430 job5: (groupid=0, jobs=1): err= 0: pid=301694: Mon Nov 18 00:31:22 2024 00:26:00.430 read: IOPS=243, BW=60.8MiB/s (63.7MB/s)(623MiB/10248msec) 00:26:00.430 slat (usec): min=9, max=433493, avg=2603.16, stdev=17544.05 00:26:00.430 clat (usec): min=697, max=997875, avg=260488.25, stdev=221680.99 00:26:00.430 lat (usec): min=768, max=1094.3k, avg=263091.41, stdev=224775.53 00:26:00.430 clat percentiles (msec): 00:26:00.430 | 1.00th=[ 4], 5.00th=[ 15], 10.00th=[ 17], 20.00th=[ 28], 00:26:00.430 | 30.00th=[ 105], 40.00th=[ 194], 50.00th=[ 241], 60.00th=[ 279], 00:26:00.430 | 70.00th=[ 338], 80.00th=[ 393], 90.00th=[ 592], 95.00th=[ 776], 00:26:00.430 | 99.00th=[ 919], 99.50th=[ 953], 99.90th=[ 969], 99.95th=[ 995], 00:26:00.430 | 99.99th=[ 995] 00:26:00.430 bw ( KiB/s): min=12288, max=242688, per=8.12%, avg=62131.20, stdev=54610.40, samples=20 00:26:00.430 iops : min= 48, max= 948, avg=242.70, stdev=213.32, samples=20 00:26:00.430 lat (usec) : 750=0.04%, 1000=0.20% 00:26:00.430 lat (msec) : 2=0.52%, 4=0.24%, 10=1.08%, 20=12.73%, 50=9.31% 
00:26:00.430 lat (msec) : 100=5.38%, 250=22.92%, 500=36.41%, 750=5.38%, 1000=5.78% 00:26:00.430 cpu : usr=0.18%, sys=0.84%, ctx=579, majf=0, minf=4097 00:26:00.430 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:00.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.430 issued rwts: total=2491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.430 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.430 job6: (groupid=0, jobs=1): err= 0: pid=301695: Mon Nov 18 00:31:22 2024 00:26:00.430 read: IOPS=331, BW=82.9MiB/s (86.9MB/s)(831MiB/10029msec) 00:26:00.430 slat (usec): min=13, max=233543, avg=3005.31, stdev=13200.12 00:26:00.430 clat (msec): min=20, max=748, avg=189.96, stdev=135.94 00:26:00.430 lat (msec): min=28, max=772, avg=192.97, stdev=137.99 00:26:00.430 clat percentiles (msec): 00:26:00.430 | 1.00th=[ 46], 5.00th=[ 64], 10.00th=[ 68], 20.00th=[ 80], 00:26:00.430 | 30.00th=[ 95], 40.00th=[ 126], 50.00th=[ 153], 60.00th=[ 178], 00:26:00.430 | 70.00th=[ 220], 80.00th=[ 266], 90.00th=[ 393], 95.00th=[ 514], 00:26:00.430 | 99.00th=[ 642], 99.50th=[ 651], 99.90th=[ 684], 99.95th=[ 751], 00:26:00.430 | 99.99th=[ 751] 00:26:00.430 bw ( KiB/s): min=25088, max=216064, per=10.91%, avg=83481.60, stdev=52922.46, samples=20 00:26:00.430 iops : min= 98, max= 844, avg=326.10, stdev=206.73, samples=20 00:26:00.430 lat (msec) : 50=1.26%, 100=31.14%, 250=44.86%, 500=16.88%, 750=5.87% 00:26:00.430 cpu : usr=0.11%, sys=1.29%, ctx=425, majf=0, minf=4097 00:26:00.430 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:00.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.430 issued rwts: total=3324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.430 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:26:00.430 job7: (groupid=0, jobs=1): err= 0: pid=301696: Mon Nov 18 00:31:22 2024 00:26:00.430 read: IOPS=390, BW=97.6MiB/s (102MB/s)(981MiB/10043msec) 00:26:00.430 slat (usec): min=8, max=312621, avg=1537.73, stdev=10339.24 00:26:00.430 clat (usec): min=906, max=1054.4k, avg=162255.59, stdev=163629.39 00:26:00.430 lat (usec): min=936, max=1054.4k, avg=163793.32, stdev=165354.30 00:26:00.430 clat percentiles (msec): 00:26:00.430 | 1.00th=[ 3], 5.00th=[ 12], 10.00th=[ 27], 20.00th=[ 51], 00:26:00.430 | 30.00th=[ 71], 40.00th=[ 80], 50.00th=[ 99], 60.00th=[ 123], 00:26:00.430 | 70.00th=[ 167], 80.00th=[ 279], 90.00th=[ 376], 95.00th=[ 472], 00:26:00.430 | 99.00th=[ 785], 99.50th=[ 827], 99.90th=[ 936], 99.95th=[ 1053], 00:26:00.430 | 99.99th=[ 1053] 00:26:00.430 bw ( KiB/s): min=22528, max=278016, per=12.91%, avg=98793.90, stdev=70540.41, samples=20 00:26:00.430 iops : min= 88, max= 1086, avg=385.90, stdev=275.56, samples=20 00:26:00.430 lat (usec) : 1000=0.03% 00:26:00.430 lat (msec) : 2=0.61%, 4=1.40%, 10=2.63%, 20=2.93%, 50=12.37% 00:26:00.430 lat (msec) : 100=30.62%, 250=26.01%, 500=18.79%, 750=3.16%, 1000=1.38% 00:26:00.430 lat (msec) : 2000=0.08% 00:26:00.430 cpu : usr=0.13%, sys=1.10%, ctx=1336, majf=0, minf=4097 00:26:00.430 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:00.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.430 issued rwts: total=3922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.430 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.430 job8: (groupid=0, jobs=1): err= 0: pid=301697: Mon Nov 18 00:31:22 2024 00:26:00.430 read: IOPS=328, BW=82.2MiB/s (86.2MB/s)(842MiB/10246msec) 00:26:00.430 slat (usec): min=8, max=438680, avg=2129.65, stdev=15163.08 00:26:00.430 clat (usec): min=1194, max=1054.3k, avg=192405.04, stdev=167320.52 00:26:00.431 lat 
(usec): min=1222, max=1054.3k, avg=194534.69, stdev=169329.38 00:26:00.431 clat percentiles (msec): 00:26:00.431 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 31], 20.00th=[ 66], 00:26:00.431 | 30.00th=[ 100], 40.00th=[ 117], 50.00th=[ 140], 60.00th=[ 169], 00:26:00.431 | 70.00th=[ 226], 80.00th=[ 305], 90.00th=[ 447], 95.00th=[ 527], 00:26:00.431 | 99.00th=[ 693], 99.50th=[ 793], 99.90th=[ 1053], 99.95th=[ 1053], 00:26:00.431 | 99.99th=[ 1053] 00:26:00.431 bw ( KiB/s): min=31232, max=192512, per=11.05%, avg=84582.40, stdev=43145.60, samples=20 00:26:00.431 iops : min= 122, max= 752, avg=330.40, stdev=168.54, samples=20 00:26:00.431 lat (msec) : 2=0.53%, 4=1.66%, 10=2.97%, 20=2.02%, 50=10.96% 00:26:00.431 lat (msec) : 100=12.53%, 250=43.26%, 500=19.42%, 750=6.12%, 1000=0.12% 00:26:00.431 lat (msec) : 2000=0.42% 00:26:00.431 cpu : usr=0.16%, sys=0.78%, ctx=952, majf=0, minf=3721 00:26:00.431 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:00.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.431 issued rwts: total=3368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.431 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.431 job9: (groupid=0, jobs=1): err= 0: pid=301698: Mon Nov 18 00:31:22 2024 00:26:00.431 read: IOPS=185, BW=46.4MiB/s (48.7MB/s)(476MiB/10248msec) 00:26:00.431 slat (usec): min=8, max=263981, avg=4522.34, stdev=18454.84 00:26:00.431 clat (msec): min=42, max=973, avg=339.82, stdev=169.22 00:26:00.431 lat (msec): min=42, max=973, avg=344.34, stdev=171.61 00:26:00.431 clat percentiles (msec): 00:26:00.431 | 1.00th=[ 54], 5.00th=[ 132], 10.00th=[ 174], 20.00th=[ 209], 00:26:00.431 | 30.00th=[ 249], 40.00th=[ 268], 50.00th=[ 296], 60.00th=[ 342], 00:26:00.431 | 70.00th=[ 397], 80.00th=[ 468], 90.00th=[ 575], 95.00th=[ 693], 00:26:00.431 | 99.00th=[ 818], 99.50th=[ 969], 99.90th=[ 978], 99.95th=[ 
978], 00:26:00.431 | 99.99th=[ 978] 00:26:00.431 bw ( KiB/s): min=19456, max=81920, per=6.15%, avg=47078.40, stdev=19056.54, samples=20 00:26:00.431 iops : min= 76, max= 320, avg=183.90, stdev=74.44, samples=20 00:26:00.431 lat (msec) : 50=0.11%, 100=3.78%, 250=27.59%, 500=51.76%, 750=13.40% 00:26:00.431 lat (msec) : 1000=3.36% 00:26:00.431 cpu : usr=0.08%, sys=0.53%, ctx=304, majf=0, minf=4097 00:26:00.431 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:26:00.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.431 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.431 issued rwts: total=1903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.431 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.431 job10: (groupid=0, jobs=1): err= 0: pid=301699: Mon Nov 18 00:31:22 2024 00:26:00.431 read: IOPS=254, BW=63.7MiB/s (66.8MB/s)(653MiB/10246msec) 00:26:00.431 slat (usec): min=9, max=499003, avg=1321.98, stdev=12042.33 00:26:00.431 clat (usec): min=739, max=1101.9k, avg=249700.98, stdev=204320.42 00:26:00.431 lat (usec): min=795, max=1101.9k, avg=251022.97, stdev=205172.04 00:26:00.431 clat percentiles (usec): 00:26:00.431 | 1.00th=[ 1029], 5.00th=[ 5145], 10.00th=[ 13042], 00:26:00.431 | 20.00th=[ 50594], 30.00th=[ 98042], 40.00th=[ 193987], 00:26:00.431 | 50.00th=[ 238027], 60.00th=[ 270533], 70.00th=[ 308282], 00:26:00.431 | 80.00th=[ 371196], 90.00th=[ 557843], 95.00th=[ 658506], 00:26:00.431 | 99.00th=[ 859833], 99.50th=[ 910164], 99.90th=[ 926942], 00:26:00.431 | 99.95th=[ 926942], 99.99th=[1098908] 00:26:00.431 bw ( KiB/s): min=20992, max=249856, per=8.52%, avg=65177.60, stdev=50087.96, samples=20 00:26:00.431 iops : min= 82, max= 976, avg=254.60, stdev=195.66, samples=20 00:26:00.431 lat (usec) : 750=0.04%, 1000=0.61% 00:26:00.431 lat (msec) : 2=3.75%, 4=0.23%, 10=4.90%, 20=1.95%, 50=8.39% 00:26:00.431 lat (msec) : 100=10.23%, 250=24.18%, 500=32.30%, 750=10.46%, 
1000=2.91% 00:26:00.431 lat (msec) : 2000=0.04% 00:26:00.431 cpu : usr=0.12%, sys=0.93%, ctx=986, majf=0, minf=4097 00:26:00.431 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:00.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:00.431 issued rwts: total=2610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.431 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:00.431 00:26:00.431 Run status group 0 (all jobs): 00:26:00.431 READ: bw=747MiB/s (784MB/s), 44.3MiB/s-101MiB/s (46.4MB/s-106MB/s), io=7660MiB (8032MB), run=10028-10249msec 00:26:00.431 00:26:00.431 Disk stats (read/write): 00:26:00.431 nvme0n1: ios=4003/0, merge=0/0, ticks=1230499/0, in_queue=1230499, util=96.51% 00:26:00.431 nvme10n1: ios=7719/0, merge=0/0, ticks=1233360/0, in_queue=1233360, util=96.83% 00:26:00.431 nvme1n1: ios=3523/0, merge=0/0, ticks=1208017/0, in_queue=1208017, util=97.42% 00:26:00.431 nvme2n1: ios=5832/0, merge=0/0, ticks=1233039/0, in_queue=1233039, util=97.70% 00:26:00.431 nvme3n1: ios=4218/0, merge=0/0, ticks=1226241/0, in_queue=1226241, util=97.84% 00:26:00.431 nvme4n1: ios=4865/0, merge=0/0, ticks=1214929/0, in_queue=1214929, util=98.36% 00:26:00.431 nvme5n1: ios=6288/0, merge=0/0, ticks=1233152/0, in_queue=1233152, util=98.47% 00:26:00.431 nvme6n1: ios=7477/0, merge=0/0, ticks=1226646/0, in_queue=1226646, util=98.56% 00:26:00.431 nvme7n1: ios=6638/0, merge=0/0, ticks=1234010/0, in_queue=1234010, util=98.97% 00:26:00.431 nvme8n1: ios=3696/0, merge=0/0, ticks=1216636/0, in_queue=1216636, util=99.13% 00:26:00.431 nvme9n1: ios=5158/0, merge=0/0, ticks=1256480/0, in_queue=1256480, util=99.26% 00:26:00.431 00:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:00.431 
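The fio-wrapper invocation above (`-p nvmf -i 262144 -d 64 -t randwrite -r 10`) drives an fio run whose effective job file is echoed in the log that follows. As a readability aid, this sketch reconstructs that job file from the echoed `[global]`/`[jobN]` sections; the output file name `fio.job` is hypothetical (the wrapper's actual temp-file name is not shown in the log), and the device order matches the `job0`..`job10` listing.

```shell
# Sketch only: rebuild the fio job file implied by the wrapper's echoed
# config. "fio.job" is a hypothetical name; the real wrapper manages its
# own temp file.
cat > fio.job <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1
EOF

# One job section per namespace, in the order the log lists them.
i=0
for dev in /dev/nvme0n1 /dev/nvme10n1 /dev/nvme1n1 /dev/nvme2n1 \
           /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 \
           /dev/nvme7n1 /dev/nvme8n1 /dev/nvme9n1; do
    printf '\n[job%d]\nfilename=%s\n' "$i" "$dev" >> fio.job
    i=$((i + 1))
done

# fio fio.job   # would launch the 11-thread randwrite pass
```

Running `fio fio.job` against live NVMe-oF namespaces would reproduce the 11-thread, 10-second randwrite pass whose per-job statistics follow.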
[global] 00:26:00.431 thread=1 00:26:00.431 invalidate=1 00:26:00.431 rw=randwrite 00:26:00.431 time_based=1 00:26:00.431 runtime=10 00:26:00.431 ioengine=libaio 00:26:00.431 direct=1 00:26:00.431 bs=262144 00:26:00.431 iodepth=64 00:26:00.431 norandommap=1 00:26:00.431 numjobs=1 00:26:00.431 00:26:00.431 [job0] 00:26:00.431 filename=/dev/nvme0n1 00:26:00.431 [job1] 00:26:00.431 filename=/dev/nvme10n1 00:26:00.431 [job2] 00:26:00.431 filename=/dev/nvme1n1 00:26:00.431 [job3] 00:26:00.431 filename=/dev/nvme2n1 00:26:00.431 [job4] 00:26:00.431 filename=/dev/nvme3n1 00:26:00.431 [job5] 00:26:00.431 filename=/dev/nvme4n1 00:26:00.431 [job6] 00:26:00.431 filename=/dev/nvme5n1 00:26:00.431 [job7] 00:26:00.431 filename=/dev/nvme6n1 00:26:00.431 [job8] 00:26:00.431 filename=/dev/nvme7n1 00:26:00.431 [job9] 00:26:00.431 filename=/dev/nvme8n1 00:26:00.431 [job10] 00:26:00.431 filename=/dev/nvme9n1 00:26:00.431 Could not set queue depth (nvme0n1) 00:26:00.431 Could not set queue depth (nvme10n1) 00:26:00.431 Could not set queue depth (nvme1n1) 00:26:00.431 Could not set queue depth (nvme2n1) 00:26:00.431 Could not set queue depth (nvme3n1) 00:26:00.431 Could not set queue depth (nvme4n1) 00:26:00.431 Could not set queue depth (nvme5n1) 00:26:00.431 Could not set queue depth (nvme6n1) 00:26:00.431 Could not set queue depth (nvme7n1) 00:26:00.431 Could not set queue depth (nvme8n1) 00:26:00.431 Could not set queue depth (nvme9n1) 00:26:00.431 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.431 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.431 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.431 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.431 job4: (g=0): rw=randwrite, bs=(R) 
256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.431 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.431 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.431 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.431 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.431 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.431 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:00.431 fio-3.35 00:26:00.431 Starting 11 threads 00:26:10.402 00:26:10.403 job0: (groupid=0, jobs=1): err= 0: pid=302426: Mon Nov 18 00:31:33 2024 00:26:10.403 write: IOPS=302, BW=75.7MiB/s (79.4MB/s)(777MiB/10255msec); 0 zone resets 00:26:10.403 slat (usec): min=16, max=143991, avg=2374.94, stdev=7609.04 00:26:10.403 clat (usec): min=1345, max=819872, avg=208791.93, stdev=163679.58 00:26:10.403 lat (usec): min=1784, max=862740, avg=211166.87, stdev=165326.68 00:26:10.403 clat percentiles (msec): 00:26:10.403 | 1.00th=[ 22], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 92], 00:26:10.403 | 30.00th=[ 101], 40.00th=[ 108], 50.00th=[ 144], 60.00th=[ 186], 00:26:10.403 | 70.00th=[ 245], 80.00th=[ 321], 90.00th=[ 481], 95.00th=[ 558], 00:26:10.403 | 99.00th=[ 709], 99.50th=[ 735], 99.90th=[ 818], 99.95th=[ 818], 00:26:10.403 | 99.99th=[ 818] 00:26:10.403 bw ( KiB/s): min=18432, max=210944, per=7.03%, avg=77879.65, stdev=51527.78, samples=20 00:26:10.403 iops : min= 72, max= 824, avg=304.15, stdev=201.23, samples=20 00:26:10.403 lat (msec) : 2=0.10%, 4=0.71%, 10=0.03%, 20=0.13%, 50=9.11% 00:26:10.403 lat (msec) : 100=19.61%, 250=41.24%, 
500=20.25%, 750=8.56%, 1000=0.26% 00:26:10.403 cpu : usr=1.03%, sys=1.11%, ctx=1273, majf=0, minf=1 00:26:10.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:10.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.403 issued rwts: total=0,3106,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.403 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.403 job1: (groupid=0, jobs=1): err= 0: pid=302429: Mon Nov 18 00:31:33 2024 00:26:10.403 write: IOPS=296, BW=74.1MiB/s (77.7MB/s)(762MiB/10280msec); 0 zone resets 00:26:10.403 slat (usec): min=19, max=144721, avg=2315.36, stdev=8239.31 00:26:10.403 clat (usec): min=1001, max=838706, avg=213376.99, stdev=171604.78 00:26:10.403 lat (usec): min=1063, max=838746, avg=215692.35, stdev=173782.34 00:26:10.403 clat percentiles (msec): 00:26:10.403 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 17], 20.00th=[ 74], 00:26:10.403 | 30.00th=[ 115], 40.00th=[ 140], 50.00th=[ 190], 60.00th=[ 220], 00:26:10.403 | 70.00th=[ 247], 80.00th=[ 288], 90.00th=[ 518], 95.00th=[ 567], 00:26:10.403 | 99.00th=[ 726], 99.50th=[ 743], 99.90th=[ 793], 99.95th=[ 835], 00:26:10.403 | 99.99th=[ 835] 00:26:10.403 bw ( KiB/s): min=18432, max=150528, per=6.89%, avg=76364.75, stdev=40333.18, samples=20 00:26:10.403 iops : min= 72, max= 588, avg=298.15, stdev=157.58, samples=20 00:26:10.403 lat (msec) : 2=0.30%, 4=1.12%, 10=4.23%, 20=5.78%, 50=3.12% 00:26:10.403 lat (msec) : 100=10.63%, 250=46.34%, 500=17.00%, 750=11.16%, 1000=0.33% 00:26:10.403 cpu : usr=0.90%, sys=1.29%, ctx=1782, majf=0, minf=1 00:26:10.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:10.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.403 issued rwts: total=0,3047,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:26:10.403 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.403 job2: (groupid=0, jobs=1): err= 0: pid=302430: Mon Nov 18 00:31:33 2024 00:26:10.403 write: IOPS=326, BW=81.5MiB/s (85.5MB/s)(838MiB/10274msec); 0 zone resets 00:26:10.403 slat (usec): min=20, max=50685, avg=2745.26, stdev=6707.99 00:26:10.403 clat (msec): min=3, max=861, avg=193.35, stdev=145.92 00:26:10.403 lat (msec): min=3, max=861, avg=196.09, stdev=147.96 00:26:10.403 clat percentiles (msec): 00:26:10.403 | 1.00th=[ 16], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 89], 00:26:10.403 | 30.00th=[ 95], 40.00th=[ 112], 50.00th=[ 142], 60.00th=[ 163], 00:26:10.403 | 70.00th=[ 199], 80.00th=[ 300], 90.00th=[ 443], 95.00th=[ 493], 00:26:10.403 | 99.00th=[ 651], 99.50th=[ 701], 99.90th=[ 818], 99.95th=[ 860], 00:26:10.403 | 99.99th=[ 860] 00:26:10.403 bw ( KiB/s): min=24576, max=186368, per=7.60%, avg=84144.40, stdev=51153.03, samples=20 00:26:10.403 iops : min= 96, max= 728, avg=328.60, stdev=199.80, samples=20 00:26:10.403 lat (msec) : 4=0.03%, 10=0.72%, 20=0.45%, 50=1.10%, 100=32.14% 00:26:10.403 lat (msec) : 250=42.64%, 500=18.53%, 750=4.09%, 1000=0.30% 00:26:10.403 cpu : usr=0.94%, sys=1.16%, ctx=1094, majf=0, minf=1 00:26:10.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:10.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.403 issued rwts: total=0,3351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.403 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.403 job3: (groupid=0, jobs=1): err= 0: pid=302442: Mon Nov 18 00:31:33 2024 00:26:10.403 write: IOPS=746, BW=187MiB/s (196MB/s)(1882MiB/10091msec); 0 zone resets 00:26:10.403 slat (usec): min=15, max=45394, avg=885.89, stdev=2550.31 00:26:10.403 clat (usec): min=732, max=481273, avg=84870.48, stdev=72421.32 00:26:10.403 lat (usec): min=779, max=487015, 
avg=85756.37, stdev=72952.74 00:26:10.403 clat percentiles (msec): 00:26:10.403 | 1.00th=[ 14], 5.00th=[ 30], 10.00th=[ 41], 20.00th=[ 44], 00:26:10.403 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 82], 00:26:10.403 | 70.00th=[ 96], 80.00th=[ 109], 90.00th=[ 159], 95.00th=[ 220], 00:26:10.403 | 99.00th=[ 435], 99.50th=[ 456], 99.90th=[ 472], 99.95th=[ 477], 00:26:10.403 | 99.99th=[ 481] 00:26:10.403 bw ( KiB/s): min=41472, max=377101, per=17.24%, avg=191044.60, stdev=99101.40, samples=20 00:26:10.403 iops : min= 162, max= 1473, avg=746.20, stdev=387.16, samples=20 00:26:10.403 lat (usec) : 750=0.01%, 1000=0.03% 00:26:10.403 lat (msec) : 2=0.01%, 4=0.25%, 10=0.37%, 20=1.61%, 50=45.42% 00:26:10.403 lat (msec) : 100=25.16%, 250=23.26%, 500=3.88% 00:26:10.403 cpu : usr=2.30%, sys=2.50%, ctx=3664, majf=0, minf=2 00:26:10.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:10.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.403 issued rwts: total=0,7528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.403 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.403 job4: (groupid=0, jobs=1): err= 0: pid=302444: Mon Nov 18 00:31:33 2024 00:26:10.403 write: IOPS=431, BW=108MiB/s (113MB/s)(1083MiB/10042msec); 0 zone resets 00:26:10.403 slat (usec): min=24, max=86985, avg=2186.71, stdev=5538.65 00:26:10.403 clat (msec): min=6, max=566, avg=145.91, stdev=116.41 00:26:10.403 lat (msec): min=7, max=566, avg=148.10, stdev=118.09 00:26:10.403 clat percentiles (msec): 00:26:10.403 | 1.00th=[ 31], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 53], 00:26:10.403 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 115], 60.00th=[ 148], 00:26:10.403 | 70.00th=[ 178], 80.00th=[ 232], 90.00th=[ 292], 95.00th=[ 414], 00:26:10.403 | 99.00th=[ 542], 99.50th=[ 558], 99.90th=[ 567], 99.95th=[ 567], 00:26:10.403 | 99.99th=[ 567] 00:26:10.403 
bw ( KiB/s): min=28614, max=306176, per=9.86%, avg=109288.20, stdev=85159.17, samples=20 00:26:10.403 iops : min= 111, max= 1196, avg=426.80, stdev=332.70, samples=20 00:26:10.403 lat (msec) : 10=0.09%, 20=0.02%, 50=11.47%, 100=37.09%, 250=37.64% 00:26:10.403 lat (msec) : 500=10.96%, 750=2.72% 00:26:10.403 cpu : usr=1.33%, sys=1.46%, ctx=1293, majf=0, minf=1 00:26:10.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:10.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.403 issued rwts: total=0,4333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.403 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.403 job5: (groupid=0, jobs=1): err= 0: pid=302448: Mon Nov 18 00:31:33 2024 00:26:10.403 write: IOPS=332, BW=83.0MiB/s (87.0MB/s)(853MiB/10279msec); 0 zone resets 00:26:10.403 slat (usec): min=15, max=275321, avg=1715.75, stdev=9277.76 00:26:10.403 clat (usec): min=891, max=951744, avg=190917.47, stdev=184267.69 00:26:10.403 lat (usec): min=917, max=951782, avg=192633.22, stdev=186469.09 00:26:10.403 clat percentiles (msec): 00:26:10.403 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 29], 20.00th=[ 50], 00:26:10.403 | 30.00th=[ 72], 40.00th=[ 102], 50.00th=[ 134], 60.00th=[ 165], 00:26:10.403 | 70.00th=[ 207], 80.00th=[ 268], 90.00th=[ 502], 95.00th=[ 634], 00:26:10.403 | 99.00th=[ 743], 99.50th=[ 785], 99.90th=[ 902], 99.95th=[ 953], 00:26:10.403 | 99.99th=[ 953] 00:26:10.403 bw ( KiB/s): min=22483, max=161792, per=7.74%, avg=85721.85, stdev=47716.01, samples=20 00:26:10.403 iops : min= 87, max= 632, avg=334.75, stdev=186.43, samples=20 00:26:10.403 lat (usec) : 1000=0.09% 00:26:10.403 lat (msec) : 2=0.23%, 4=0.32%, 10=2.20%, 20=4.01%, 50=13.48% 00:26:10.403 lat (msec) : 100=17.64%, 250=40.58%, 500=11.43%, 750=9.05%, 1000=0.97% 00:26:10.403 cpu : usr=1.10%, sys=1.10%, ctx=2472, majf=0, minf=2 00:26:10.403 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:10.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.403 issued rwts: total=0,3413,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.403 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.403 job6: (groupid=0, jobs=1): err= 0: pid=302449: Mon Nov 18 00:31:33 2024 00:26:10.403 write: IOPS=351, BW=87.9MiB/s (92.2MB/s)(903MiB/10274msec); 0 zone resets 00:26:10.403 slat (usec): min=15, max=89749, avg=1415.40, stdev=5769.24 00:26:10.403 clat (usec): min=740, max=926540, avg=180447.09, stdev=170578.87 00:26:10.403 lat (usec): min=772, max=926589, avg=181862.49, stdev=172099.28 00:26:10.403 clat percentiles (msec): 00:26:10.404 | 1.00th=[ 3], 5.00th=[ 14], 10.00th=[ 26], 20.00th=[ 44], 00:26:10.404 | 30.00th=[ 58], 40.00th=[ 86], 50.00th=[ 127], 60.00th=[ 165], 00:26:10.404 | 70.00th=[ 209], 80.00th=[ 292], 90.00th=[ 472], 95.00th=[ 527], 00:26:10.404 | 99.00th=[ 642], 99.50th=[ 676], 99.90th=[ 869], 99.95th=[ 919], 00:26:10.404 | 99.99th=[ 927] 00:26:10.404 bw ( KiB/s): min=28160, max=295424, per=8.20%, avg=90844.25, stdev=67956.27, samples=20 00:26:10.404 iops : min= 110, max= 1154, avg=354.80, stdev=265.50, samples=20 00:26:10.404 lat (usec) : 750=0.03%, 1000=0.11% 00:26:10.404 lat (msec) : 2=0.61%, 4=1.11%, 10=2.05%, 20=3.60%, 50=18.52% 00:26:10.404 lat (msec) : 100=17.27%, 250=34.60%, 500=14.86%, 750=6.84%, 1000=0.42% 00:26:10.404 cpu : usr=1.14%, sys=1.35%, ctx=2442, majf=0, minf=1 00:26:10.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:10.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.404 issued rwts: total=0,3613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.404 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:26:10.404 job7: (groupid=0, jobs=1): err= 0: pid=302450: Mon Nov 18 00:31:33 2024 00:26:10.404 write: IOPS=517, BW=129MiB/s (136MB/s)(1306MiB/10092msec); 0 zone resets 00:26:10.404 slat (usec): min=24, max=57082, avg=1835.42, stdev=4082.23 00:26:10.404 clat (usec): min=1255, max=351542, avg=121681.38, stdev=76019.36 00:26:10.404 lat (usec): min=1906, max=373402, avg=123516.80, stdev=77030.68 00:26:10.404 clat percentiles (msec): 00:26:10.404 | 1.00th=[ 18], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 48], 00:26:10.404 | 30.00th=[ 85], 40.00th=[ 90], 50.00th=[ 96], 60.00th=[ 106], 00:26:10.404 | 70.00th=[ 136], 80.00th=[ 194], 90.00th=[ 249], 95.00th=[ 284], 00:26:10.404 | 99.00th=[ 334], 99.50th=[ 342], 99.90th=[ 351], 99.95th=[ 351], 00:26:10.404 | 99.99th=[ 351] 00:26:10.404 bw ( KiB/s): min=50176, max=345600, per=11.92%, avg=132092.40, stdev=77524.82, samples=20 00:26:10.404 iops : min= 196, max= 1350, avg=515.90, stdev=302.81, samples=20 00:26:10.404 lat (msec) : 2=0.04%, 4=0.08%, 10=0.46%, 20=0.57%, 50=19.25% 00:26:10.404 lat (msec) : 100=33.65%, 250=36.27%, 500=9.68% 00:26:10.404 cpu : usr=1.63%, sys=1.58%, ctx=1464, majf=0, minf=1 00:26:10.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:10.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.404 issued rwts: total=0,5225,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.404 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.404 job8: (groupid=0, jobs=1): err= 0: pid=302452: Mon Nov 18 00:31:33 2024 00:26:10.404 write: IOPS=293, BW=73.4MiB/s (76.9MB/s)(754MiB/10279msec); 0 zone resets 00:26:10.404 slat (usec): min=24, max=82300, avg=2290.34, stdev=7172.21 00:26:10.404 clat (msec): min=2, max=837, avg=214.82, stdev=170.73 00:26:10.404 lat (msec): min=2, max=837, avg=217.11, stdev=172.66 00:26:10.404 clat percentiles 
(msec): 00:26:10.404 | 1.00th=[ 12], 5.00th=[ 19], 10.00th=[ 33], 20.00th=[ 66], 00:26:10.404 | 30.00th=[ 105], 40.00th=[ 120], 50.00th=[ 159], 60.00th=[ 197], 00:26:10.404 | 70.00th=[ 279], 80.00th=[ 414], 90.00th=[ 485], 95.00th=[ 542], 00:26:10.404 | 99.00th=[ 625], 99.50th=[ 667], 99.90th=[ 810], 99.95th=[ 827], 00:26:10.404 | 99.99th=[ 835] 00:26:10.404 bw ( KiB/s): min=30147, max=200704, per=6.82%, avg=75551.50, stdev=49230.96, samples=20 00:26:10.404 iops : min= 117, max= 784, avg=295.00, stdev=192.32, samples=20 00:26:10.404 lat (msec) : 4=0.03%, 10=0.76%, 20=4.58%, 50=10.78%, 100=12.77% 00:26:10.404 lat (msec) : 250=39.12%, 500=23.51%, 750=8.26%, 1000=0.20% 00:26:10.404 cpu : usr=0.90%, sys=1.14%, ctx=1793, majf=0, minf=1 00:26:10.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:10.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.404 issued rwts: total=0,3016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.404 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.404 job9: (groupid=0, jobs=1): err= 0: pid=302453: Mon Nov 18 00:31:33 2024 00:26:10.404 write: IOPS=450, BW=113MiB/s (118MB/s)(1132MiB/10039msec); 0 zone resets 00:26:10.404 slat (usec): min=23, max=238505, avg=1685.63, stdev=6326.82 00:26:10.404 clat (msec): min=3, max=735, avg=140.17, stdev=114.74 00:26:10.404 lat (msec): min=3, max=735, avg=141.86, stdev=115.74 00:26:10.404 clat percentiles (msec): 00:26:10.404 | 1.00th=[ 23], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 61], 00:26:10.404 | 30.00th=[ 87], 40.00th=[ 91], 50.00th=[ 96], 60.00th=[ 107], 00:26:10.404 | 70.00th=[ 161], 80.00th=[ 209], 90.00th=[ 255], 95.00th=[ 368], 00:26:10.404 | 99.00th=[ 617], 99.50th=[ 642], 99.90th=[ 718], 99.95th=[ 735], 00:26:10.404 | 99.99th=[ 735] 00:26:10.404 bw ( KiB/s): min=26059, max=311696, per=10.31%, avg=114219.35, stdev=68556.56, samples=20 
00:26:10.404 iops : min= 101, max= 1217, avg=446.05, stdev=267.79, samples=20 00:26:10.404 lat (msec) : 4=0.02%, 10=0.11%, 20=0.42%, 50=16.97%, 100=36.83% 00:26:10.404 lat (msec) : 250=35.13%, 500=7.62%, 750=2.89% 00:26:10.404 cpu : usr=1.51%, sys=1.47%, ctx=1824, majf=0, minf=1 00:26:10.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:10.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.404 issued rwts: total=0,4526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.404 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.404 job10: (groupid=0, jobs=1): err= 0: pid=302454: Mon Nov 18 00:31:33 2024 00:26:10.404 write: IOPS=323, BW=81.0MiB/s (84.9MB/s)(833MiB/10278msec); 0 zone resets 00:26:10.404 slat (usec): min=22, max=55585, avg=2951.49, stdev=6921.77 00:26:10.404 clat (usec): min=1575, max=843511, avg=194462.86, stdev=147196.45 00:26:10.404 lat (usec): min=1657, max=843543, avg=197414.35, stdev=149197.65 00:26:10.404 clat percentiles (msec): 00:26:10.404 | 1.00th=[ 52], 5.00th=[ 83], 10.00th=[ 87], 20.00th=[ 89], 00:26:10.404 | 30.00th=[ 99], 40.00th=[ 112], 50.00th=[ 129], 60.00th=[ 159], 00:26:10.404 | 70.00th=[ 186], 80.00th=[ 296], 90.00th=[ 451], 95.00th=[ 493], 00:26:10.404 | 99.00th=[ 651], 99.50th=[ 693], 99.90th=[ 802], 99.95th=[ 844], 00:26:10.404 | 99.99th=[ 844] 00:26:10.404 bw ( KiB/s): min=24576, max=186368, per=7.54%, avg=83576.60, stdev=52742.70, samples=20 00:26:10.404 iops : min= 96, max= 728, avg=326.35, stdev=205.97, samples=20 00:26:10.404 lat (msec) : 2=0.03%, 4=0.24%, 20=0.12%, 50=0.60%, 100=30.87% 00:26:10.404 lat (msec) : 250=44.41%, 500=19.31%, 750=4.11%, 1000=0.30% 00:26:10.404 cpu : usr=0.91%, sys=1.17%, ctx=864, majf=0, minf=1 00:26:10.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:10.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:10.404 issued rwts: total=0,3330,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.404 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:10.404 00:26:10.404 Run status group 0 (all jobs): 00:26:10.404 WRITE: bw=1082MiB/s (1134MB/s), 73.4MiB/s-187MiB/s (76.9MB/s-196MB/s), io=10.9GiB (11.7GB), run=10039-10280msec 00:26:10.404 00:26:10.404 Disk stats (read/write): 00:26:10.404 nvme0n1: ios=49/6163, merge=0/0, ticks=127/1239697, in_queue=1239824, util=97.73% 00:26:10.404 nvme10n1: ios=47/6029, merge=0/0, ticks=1659/1233560, in_queue=1235219, util=99.77% 00:26:10.404 nvme1n1: ios=49/6643, merge=0/0, ticks=160/1224692, in_queue=1224852, util=99.00% 00:26:10.404 nvme2n1: ios=44/14861, merge=0/0, ticks=32/1222826, in_queue=1222858, util=97.91% 00:26:10.404 nvme3n1: ios=28/8349, merge=0/0, ticks=2838/1211741, in_queue=1214579, util=100.00% 00:26:10.404 nvme4n1: ios=0/6763, merge=0/0, ticks=0/1237588, in_queue=1237588, util=98.21% 00:26:10.404 nvme5n1: ios=37/7163, merge=0/0, ticks=523/1242320, in_queue=1242843, util=100.00% 00:26:10.404 nvme6n1: ios=31/10249, merge=0/0, ticks=623/1207615, in_queue=1208238, util=100.00% 00:26:10.404 nvme7n1: ios=46/5970, merge=0/0, ticks=1554/1226879, in_queue=1228433, util=100.00% 00:26:10.404 nvme8n1: ios=43/8722, merge=0/0, ticks=3815/1184976, in_queue=1188791, util=100.00% 00:26:10.404 nvme9n1: ios=0/6597, merge=0/0, ticks=0/1221838, in_queue=1221838, util=99.15% 00:26:10.404 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:10.404 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:10.404 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.404 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:10.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:10.404 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:10.404 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:10.404 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:10.404 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:10.404 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:10.404 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:10.404 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:10.404 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:10.404 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.404 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:10.404 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.404 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.404 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:10.662 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:10.662 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:10.662 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:10.662 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:10.662 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:10.662 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:10.662 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:10.662 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:10.662 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:10.662 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.662 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:10.662 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.662 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.662 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:10.920 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:10.920 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:10.920 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:10.920 00:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:10.920 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:10.920 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:10.920 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:10.920 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:10.920 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:10.920 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.920 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:10.920 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.920 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.920 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:11.179 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:11.179 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:11.179 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:11.179 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:11.179 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep 
-q -w SPDK4 00:26:11.179 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:11.179 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:11.179 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:11.179 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:11.179 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.179 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.179 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.179 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.179 00:31:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:11.439 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:11.439 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:11.439 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:11.439 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:11.439 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:11.439 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:11.439 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:11.439 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:11.439 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:11.439 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.439 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.439 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.439 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.439 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:11.698 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:11.698 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:11.698 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:11.698 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:11.698 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:11.698 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:11.698 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:11.698 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:11.698 00:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:11.698 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.698 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.698 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.698 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.698 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:11.954 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:11.954 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:11.954 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:11.954 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:11.954 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:11.954 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:11.954 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:11.954 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:11.954 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:11.954 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.954 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.954 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.954 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.954 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:12.211 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.211 00:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:12.211 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:26:12.211 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:12.470 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:12.470 NQN:nqn.2016-06.io.spdk:cnode11 
disconnected 1 controller(s) 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:12.470 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:12.471 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:12.471 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.471 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.471 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.471 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:12.471 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:12.471 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:12.471 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:12.471 00:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:12.471 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:12.471 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:12.471 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:12.471 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:12.471 rmmod nvme_tcp 00:26:12.729 rmmod nvme_fabrics 00:26:12.729 rmmod nvme_keyring 00:26:12.729 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:12.729 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:12.729 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:12.729 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 297432 ']' 00:26:12.729 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 297432 00:26:12.729 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 297432 ']' 00:26:12.729 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 297432 00:26:12.729 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:12.729 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:12.729 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 297432 00:26:12.729 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:12.729 00:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:12.729 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 297432' 00:26:12.730 killing process with pid 297432 00:26:12.730 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 297432 00:26:12.730 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 297432 00:26:13.297 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:13.297 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:13.297 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:13.297 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:13.297 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:13.297 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:13.297 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:13.297 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:13.297 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:13.297 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.297 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.297 00:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:26:15.568 00:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:15.568 00:26:15.568 real 1m1.416s 00:26:15.568 user 3m32.171s 00:26:15.568 sys 0m18.021s 00:26:15.569 00:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:15.569 00:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:15.569 ************************************ 00:26:15.569 END TEST nvmf_multiconnection 00:26:15.569 ************************************ 00:26:15.569 00:31:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:15.569 00:31:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:15.569 00:31:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:15.569 00:31:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:15.569 ************************************ 00:26:15.569 START TEST nvmf_initiator_timeout 00:26:15.569 ************************************ 00:26:15.569 00:31:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:15.569 * Looking for test storage... 
00:26:15.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:15.569 00:31:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:15.569 00:31:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:26:15.569 00:31:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:15.569 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:15.569 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:15.569 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:15.569 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:15.569 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:15.569 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:15.569 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:15.569 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:15.569 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:15.569 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:15.569 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:15.569 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:15.569 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:26:15.569 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:15.570 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:15.570 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:15.570 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:15.570 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:15.570 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:15.570 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:15.570 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:15.570 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:15.570 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:15.570 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:15.570 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:15.570 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:15.570 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:15.570 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:15.570 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:15.570 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:15.570 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:15.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.570 --rc genhtml_branch_coverage=1 00:26:15.570 --rc genhtml_function_coverage=1 00:26:15.570 --rc genhtml_legend=1 00:26:15.570 --rc geninfo_all_blocks=1 00:26:15.570 --rc geninfo_unexecuted_blocks=1 00:26:15.570 00:26:15.570 ' 00:26:15.570 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:15.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.570 --rc genhtml_branch_coverage=1 00:26:15.570 --rc genhtml_function_coverage=1 00:26:15.570 --rc genhtml_legend=1 00:26:15.570 --rc geninfo_all_blocks=1 00:26:15.570 --rc geninfo_unexecuted_blocks=1 00:26:15.570 00:26:15.570 ' 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:15.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.571 --rc genhtml_branch_coverage=1 00:26:15.571 --rc genhtml_function_coverage=1 00:26:15.571 --rc genhtml_legend=1 00:26:15.571 --rc geninfo_all_blocks=1 00:26:15.571 --rc geninfo_unexecuted_blocks=1 00:26:15.571 00:26:15.571 ' 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:15.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.571 --rc genhtml_branch_coverage=1 00:26:15.571 --rc genhtml_function_coverage=1 00:26:15.571 --rc genhtml_legend=1 00:26:15.571 --rc geninfo_all_blocks=1 00:26:15.571 --rc geninfo_unexecuted_blocks=1 00:26:15.571 00:26:15.571 ' 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.571 
00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:15.571 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.572 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.572 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.572 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.572 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.572 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.572 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:15.572 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.572 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:15.572 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:15.572 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:15.572 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.572 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.572 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.572 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:15.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:15.573 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:15.573 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:15.573 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:15.573 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:15.573 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:15.573 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:15.573 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:15.573 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.573 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:15.573 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:15.573 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:15.573 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.573 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.573 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.573 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:15.573 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:15.573 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:15.573 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:17.477 00:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:17.477 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:17.477 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:17.477 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:17.478 00:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:17.478 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.478 00:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:17.478 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.478 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:17.737 00:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:17.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:17.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:26:17.737 00:26:17.737 --- 10.0.0.2 ping statistics --- 00:26:17.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.737 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:17.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:17.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:26:17.737 00:26:17.737 --- 10.0.0.1 ping statistics --- 00:26:17.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.737 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=305772 
00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 305772 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 305772 ']' 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:17.737 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.737 [2024-11-18 00:31:41.454609] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:26:17.737 [2024-11-18 00:31:41.454726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.737 [2024-11-18 00:31:41.530454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:17.995 [2024-11-18 00:31:41.576058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:17.995 [2024-11-18 00:31:41.576119] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.995 [2024-11-18 00:31:41.576158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.995 [2024-11-18 00:31:41.576169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.995 [2024-11-18 00:31:41.576178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:17.995 [2024-11-18 00:31:41.577577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.995 [2024-11-18 00:31:41.577703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:17.995 [2024-11-18 00:31:41.577763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.995 [2024-11-18 00:31:41.577760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:17.995 
00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.995 Malloc0 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.995 Delay0 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.995 [2024-11-18 00:31:41.773103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.995 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.996 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:17.996 [2024-11-18 00:31:41.801398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.996 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.996 00:31:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:18.930 00:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:18.930 
00:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:18.930 00:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:18.930 00:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:18.930 00:31:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:20.829 00:31:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:20.829 00:31:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:20.829 00:31:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:20.829 00:31:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:20.829 00:31:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:20.829 00:31:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:20.829 00:31:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=306080 00:26:20.829 00:31:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:20.829 00:31:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:20.829 [global] 00:26:20.829 thread=1 00:26:20.829 invalidate=1 00:26:20.829 rw=write 00:26:20.829 time_based=1 00:26:20.829 runtime=60 00:26:20.829 ioengine=libaio 00:26:20.829 direct=1 00:26:20.829 bs=4096 00:26:20.829 
iodepth=1 00:26:20.829 norandommap=0 00:26:20.829 numjobs=1 00:26:20.829 00:26:20.829 verify_dump=1 00:26:20.829 verify_backlog=512 00:26:20.829 verify_state_save=0 00:26:20.829 do_verify=1 00:26:20.829 verify=crc32c-intel 00:26:20.829 [job0] 00:26:20.829 filename=/dev/nvme0n1 00:26:20.829 Could not set queue depth (nvme0n1) 00:26:20.829 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:20.829 fio-3.35 00:26:20.829 Starting 1 thread 00:26:24.114 00:31:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:24.114 00:31:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.114 00:31:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:24.114 true 00:26:24.114 00:31:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.114 00:31:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:24.114 00:31:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.114 00:31:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:24.114 true 00:26:24.114 00:31:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.114 00:31:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:24.114 00:31:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.114 00:31:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:26:24.114 true 00:26:24.114 00:31:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.114 00:31:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:24.114 00:31:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.114 00:31:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:24.114 true 00:26:24.114 00:31:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.114 00:31:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:26.643 00:31:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:26.643 00:31:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.901 00:31:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.901 true 00:26:26.901 00:31:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.901 00:31:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:26.901 00:31:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.901 00:31:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.901 true 00:26:26.901 00:31:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.901 00:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:26.901 00:31:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.901 00:31:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.901 true 00:26:26.901 00:31:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.901 00:31:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:26.901 00:31:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.901 00:31:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.901 true 00:26:26.901 00:31:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.901 00:31:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:26.901 00:31:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 306080 00:27:23.114 00:27:23.114 job0: (groupid=0, jobs=1): err= 0: pid=306181: Mon Nov 18 00:32:44 2024 00:27:23.114 read: IOPS=333, BW=1335KiB/s (1367kB/s)(78.2MiB/60002msec) 00:27:23.114 slat (usec): min=4, max=7827, avg=11.92, stdev=55.67 00:27:23.114 clat (usec): min=199, max=40814k, avg=2746.44, stdev=288462.29 00:27:23.114 lat (usec): min=204, max=40814k, avg=2758.36, stdev=288462.51 00:27:23.114 clat percentiles (usec): 00:27:23.114 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 237], 00:27:23.114 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 262], 00:27:23.114 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 347], 95.00th=[ 490], 00:27:23.114 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:27:23.114 | 99.99th=[41157] 00:27:23.114 write: IOPS=341, BW=1365KiB/s (1398kB/s)(80.0MiB/60002msec); 0 zone resets 00:27:23.114 slat (usec): min=5, max=28479, avg=16.02, stdev=199.19 00:27:23.114 clat (usec): min=52, max=1972, avg=210.27, stdev=40.95 00:27:23.114 lat (usec): min=161, max=28740, avg=226.29, stdev=204.57 00:27:23.114 clat percentiles (usec): 00:27:23.114 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:27:23.114 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 208], 00:27:23.114 | 70.00th=[ 219], 80.00th=[ 229], 90.00th=[ 255], 95.00th=[ 281], 00:27:23.114 | 99.00th=[ 383], 99.50th=[ 400], 99.90th=[ 433], 99.95th=[ 445], 00:27:23.114 | 99.99th=[ 494] 00:27:23.114 bw ( KiB/s): min= 3616, max= 9632, per=100.00%, avg=6826.67, stdev=1875.66, samples=24 00:27:23.114 iops : min= 904, max= 2408, avg=1706.67, stdev=468.91, samples=24 00:27:23.114 lat (usec) : 100=0.01%, 250=65.89%, 500=31.96%, 750=1.61%, 1000=0.01% 00:27:23.114 lat (msec) : 2=0.01%, 50=0.53%, >=2000=0.01% 00:27:23.114 cpu : usr=0.64%, sys=1.12%, ctx=40509, majf=0, minf=38 00:27:23.114 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:23.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.114 issued rwts: total=20022,20480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:23.114 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:23.114 00:27:23.114 Run status group 0 (all jobs): 00:27:23.114 READ: bw=1335KiB/s (1367kB/s), 1335KiB/s-1335KiB/s (1367kB/s-1367kB/s), io=78.2MiB (82.0MB), run=60002-60002msec 00:27:23.114 WRITE: bw=1365KiB/s (1398kB/s), 1365KiB/s-1365KiB/s (1398kB/s-1398kB/s), io=80.0MiB (83.9MB), run=60002-60002msec 00:27:23.114 00:27:23.114 Disk stats (read/write): 00:27:23.114 nvme0n1: ios=20120/20480, merge=0/0, ticks=16110/3999, 
in_queue=20109, util=99.82% 00:27:23.114 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:23.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:23.114 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:23.114 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:23.114 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:23.114 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:23.114 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:23.114 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:23.114 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:23.114 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:23.114 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:23.114 nvmf hotplug test: fio successful as expected 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:23.115 00:32:44 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:23.115 rmmod nvme_tcp 00:27:23.115 rmmod nvme_fabrics 00:27:23.115 rmmod nvme_keyring 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 305772 ']' 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 305772 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 305772 ']' 00:27:23.115 
00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 305772 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 305772 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 305772' 00:27:23.115 killing process with pid 305772 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 305772 00:27:23.115 00:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 305772 00:27:23.115 00:32:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:23.115 00:32:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:23.115 00:32:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:23.115 00:32:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:23.115 00:32:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:23.115 00:32:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:23.115 00:32:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # 
iptables-restore 00:27:23.115 00:32:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:23.115 00:32:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:23.115 00:32:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.115 00:32:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.115 00:32:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.688 00:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:23.688 00:27:23.688 real 1m8.353s 00:27:23.688 user 4m10.427s 00:27:23.688 sys 0m7.729s 00:27:23.688 00:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:23.688 00:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:23.688 ************************************ 00:27:23.688 END TEST nvmf_initiator_timeout 00:27:23.688 ************************************ 00:27:23.688 00:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:23.688 00:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:23.688 00:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:23.688 00:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:23.688 00:32:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@315 -- # pci_devs=() 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:25.598 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:25.598 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.598 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:25.599 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:25.599 00:32:49 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:25.599 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:25.599 ************************************ 00:27:25.599 START 
TEST nvmf_perf_adq 00:27:25.599 ************************************ 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:25.599 * Looking for test storage... 00:27:25.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:27:25.599 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:25.857 00:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:25.857 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:25.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.858 --rc genhtml_branch_coverage=1 00:27:25.858 --rc genhtml_function_coverage=1 00:27:25.858 --rc genhtml_legend=1 00:27:25.858 --rc geninfo_all_blocks=1 00:27:25.858 --rc geninfo_unexecuted_blocks=1 00:27:25.858 00:27:25.858 ' 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:25.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.858 --rc genhtml_branch_coverage=1 00:27:25.858 --rc genhtml_function_coverage=1 00:27:25.858 --rc genhtml_legend=1 00:27:25.858 --rc geninfo_all_blocks=1 00:27:25.858 --rc geninfo_unexecuted_blocks=1 00:27:25.858 00:27:25.858 ' 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:25.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.858 --rc genhtml_branch_coverage=1 00:27:25.858 --rc genhtml_function_coverage=1 00:27:25.858 --rc genhtml_legend=1 00:27:25.858 --rc geninfo_all_blocks=1 00:27:25.858 --rc geninfo_unexecuted_blocks=1 00:27:25.858 00:27:25.858 ' 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:25.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.858 --rc genhtml_branch_coverage=1 00:27:25.858 --rc genhtml_function_coverage=1 00:27:25.858 --rc genhtml_legend=1 00:27:25.858 --rc geninfo_all_blocks=1 00:27:25.858 --rc geninfo_unexecuted_blocks=1 00:27:25.858 00:27:25.858 ' 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:25.858 
00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:25.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:25.858 00:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:25.858 00:32:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:27.768 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:27.768 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:27.768 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:27.768 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:27.768 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:27.768 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:27.768 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:27.768 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:27.768 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:27.768 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:27.768 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:27.768 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:27.768 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:27.768 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:27.768 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:27.768 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:27.768 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:27.768 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:27.769 00:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:27.769 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:27.769 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:27.769 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:27.769 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:27.769 00:32:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:28.720 00:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:32.906 00:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:38.176 00:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.176 00:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:38.176 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:38.176 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:27:38.177 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:38.177 Found net devices under 0000:0a:00.0: cvl_0_0 
00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:38.177 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.177 00:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:38.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:27:38.177 00:27:38.177 --- 10.0.0.2 ping statistics --- 00:27:38.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.177 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:38.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:27:38.177 00:27:38.177 --- 10.0.0.1 ping statistics --- 00:27:38.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.177 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=318014 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 318014 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 318014 ']' 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:38.177 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.177 [2024-11-18 00:33:01.172477] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:27:38.177 [2024-11-18 00:33:01.172588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.177 [2024-11-18 00:33:01.247409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:38.177 [2024-11-18 00:33:01.294570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.177 [2024-11-18 00:33:01.294636] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:38.177 [2024-11-18 00:33:01.294650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.177 [2024-11-18 00:33:01.294679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.177 [2024-11-18 00:33:01.294705] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:38.177 [2024-11-18 00:33:01.296415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.177 [2024-11-18 00:33:01.296479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:38.177 [2024-11-18 00:33:01.296545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:38.178 [2024-11-18 00:33:01.296548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.178 [2024-11-18 00:33:01.579996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.178 
00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.178 Malloc1 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.178 [2024-11-18 00:33:01.650574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=318188 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:38.178 00:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:40.077 00:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:40.077 00:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.077 00:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:40.077 00:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.077 00:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:40.077 "tick_rate": 2700000000, 00:27:40.077 "poll_groups": [ 00:27:40.077 { 00:27:40.077 "name": "nvmf_tgt_poll_group_000", 00:27:40.077 "admin_qpairs": 1, 00:27:40.077 "io_qpairs": 1, 00:27:40.077 "current_admin_qpairs": 1, 00:27:40.077 "current_io_qpairs": 1, 00:27:40.077 "pending_bdev_io": 0, 00:27:40.077 "completed_nvme_io": 19720, 00:27:40.077 "transports": [ 00:27:40.077 { 00:27:40.077 "trtype": "TCP" 00:27:40.077 } 00:27:40.077 ] 00:27:40.077 }, 00:27:40.077 { 00:27:40.077 "name": "nvmf_tgt_poll_group_001", 00:27:40.077 "admin_qpairs": 0, 00:27:40.077 "io_qpairs": 1, 00:27:40.077 "current_admin_qpairs": 0, 00:27:40.077 "current_io_qpairs": 1, 00:27:40.077 "pending_bdev_io": 0, 00:27:40.077 "completed_nvme_io": 19344, 00:27:40.077 "transports": [ 
00:27:40.077 { 00:27:40.077 "trtype": "TCP" 00:27:40.077 } 00:27:40.077 ] 00:27:40.077 }, 00:27:40.077 { 00:27:40.077 "name": "nvmf_tgt_poll_group_002", 00:27:40.077 "admin_qpairs": 0, 00:27:40.077 "io_qpairs": 1, 00:27:40.077 "current_admin_qpairs": 0, 00:27:40.077 "current_io_qpairs": 1, 00:27:40.077 "pending_bdev_io": 0, 00:27:40.077 "completed_nvme_io": 19337, 00:27:40.077 "transports": [ 00:27:40.077 { 00:27:40.077 "trtype": "TCP" 00:27:40.077 } 00:27:40.077 ] 00:27:40.077 }, 00:27:40.077 { 00:27:40.077 "name": "nvmf_tgt_poll_group_003", 00:27:40.077 "admin_qpairs": 0, 00:27:40.077 "io_qpairs": 1, 00:27:40.077 "current_admin_qpairs": 0, 00:27:40.077 "current_io_qpairs": 1, 00:27:40.077 "pending_bdev_io": 0, 00:27:40.077 "completed_nvme_io": 19047, 00:27:40.077 "transports": [ 00:27:40.077 { 00:27:40.077 "trtype": "TCP" 00:27:40.077 } 00:27:40.077 ] 00:27:40.077 } 00:27:40.077 ] 00:27:40.077 }' 00:27:40.077 00:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:40.077 00:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:40.077 00:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:40.077 00:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:40.077 00:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 318188 00:27:48.202 Initializing NVMe Controllers 00:27:48.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:48.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:48.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:48.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:48.203 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:48.203 Initialization complete. Launching workers. 00:27:48.203 ======================================================== 00:27:48.203 Latency(us) 00:27:48.203 Device Information : IOPS MiB/s Average min max 00:27:48.203 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10121.50 39.54 6322.89 2423.29 10565.07 00:27:48.203 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10338.70 40.39 6190.97 2343.79 10254.43 00:27:48.203 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10394.10 40.60 6157.82 2521.12 10538.71 00:27:48.203 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10126.50 39.56 6321.76 2565.67 10785.96 00:27:48.203 ======================================================== 00:27:48.203 Total : 40980.79 160.08 6247.46 2343.79 10785.96 00:27:48.203 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:48.203 rmmod nvme_tcp 00:27:48.203 rmmod nvme_fabrics 00:27:48.203 rmmod nvme_keyring 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:27:48.203 00:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 318014 ']' 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 318014 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 318014 ']' 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 318014 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 318014 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 318014' 00:27:48.203 killing process with pid 318014 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 318014 00:27:48.203 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 318014 00:27:48.462 00:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:48.462 00:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:48.462 00:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:48.462 00:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:48.462 00:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:27:48.462 00:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:48.462 00:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:27:48.462 00:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:48.462 00:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:48.462 00:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.462 00:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.462 00:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.366 00:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:50.366 00:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:27:50.366 00:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:50.366 00:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:51.301 00:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:53.830 00:33:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@476 -- # prepare_net_devs 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:59.102 00:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:59.102 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:59.103 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.103 00:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:59.103 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:27:59.103 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:59.103 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:59.103 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:59.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:59.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:27:59.104 00:27:59.104 --- 10.0.0.2 ping statistics --- 00:27:59.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.104 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:59.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:27:59.104 00:27:59.104 --- 10.0.0.1 ping statistics --- 00:27:59.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.104 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:59.104 net.core.busy_poll = 1 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:59.104 net.core.busy_read = 1 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=321331 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
321331 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 321331 ']' 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.104 [2024-11-18 00:33:22.600362] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:27:59.104 [2024-11-18 00:33:22.600440] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.104 [2024-11-18 00:33:22.675946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:59.104 [2024-11-18 00:33:22.722433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:59.104 [2024-11-18 00:33:22.722483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:59.104 [2024-11-18 00:33:22.722507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:59.104 [2024-11-18 00:33:22.722518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:59.104 [2024-11-18 00:33:22.722528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:59.104 [2024-11-18 00:33:22.723965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.104 [2024-11-18 00:33:22.724045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:59.104 [2024-11-18 00:33:22.723989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:59.104 [2024-11-18 00:33:22.724048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.104 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.363 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.363 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:59.363 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.363 00:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.363 [2024-11-18 00:33:23.004045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.363 00:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.363 Malloc1 00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.363 [2024-11-18 00:33:23.072066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=321446 
00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:27:59.363 00:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:01.265 00:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:01.265 00:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.265 00:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.524 00:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.524 00:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:01.524 "tick_rate": 2700000000, 00:28:01.524 "poll_groups": [ 00:28:01.524 { 00:28:01.524 "name": "nvmf_tgt_poll_group_000", 00:28:01.524 "admin_qpairs": 1, 00:28:01.524 "io_qpairs": 0, 00:28:01.524 "current_admin_qpairs": 1, 00:28:01.524 "current_io_qpairs": 0, 00:28:01.524 "pending_bdev_io": 0, 00:28:01.524 "completed_nvme_io": 0, 00:28:01.524 "transports": [ 00:28:01.524 { 00:28:01.524 "trtype": "TCP" 00:28:01.524 } 00:28:01.524 ] 00:28:01.524 }, 00:28:01.524 { 00:28:01.524 "name": "nvmf_tgt_poll_group_001", 00:28:01.524 "admin_qpairs": 0, 00:28:01.524 "io_qpairs": 4, 00:28:01.524 "current_admin_qpairs": 0, 00:28:01.524 "current_io_qpairs": 4, 00:28:01.524 "pending_bdev_io": 0, 00:28:01.524 "completed_nvme_io": 34140, 00:28:01.524 "transports": [ 00:28:01.524 { 00:28:01.524 "trtype": "TCP" 00:28:01.524 } 00:28:01.524 ] 00:28:01.524 }, 00:28:01.524 { 00:28:01.524 "name": "nvmf_tgt_poll_group_002", 00:28:01.524 "admin_qpairs": 0, 00:28:01.524 "io_qpairs": 0, 00:28:01.524 "current_admin_qpairs": 0, 00:28:01.524 
"current_io_qpairs": 0, 00:28:01.524 "pending_bdev_io": 0, 00:28:01.524 "completed_nvme_io": 0, 00:28:01.524 "transports": [ 00:28:01.524 { 00:28:01.524 "trtype": "TCP" 00:28:01.524 } 00:28:01.524 ] 00:28:01.524 }, 00:28:01.524 { 00:28:01.524 "name": "nvmf_tgt_poll_group_003", 00:28:01.524 "admin_qpairs": 0, 00:28:01.524 "io_qpairs": 0, 00:28:01.524 "current_admin_qpairs": 0, 00:28:01.524 "current_io_qpairs": 0, 00:28:01.524 "pending_bdev_io": 0, 00:28:01.524 "completed_nvme_io": 0, 00:28:01.524 "transports": [ 00:28:01.524 { 00:28:01.524 "trtype": "TCP" 00:28:01.524 } 00:28:01.524 ] 00:28:01.524 } 00:28:01.524 ] 00:28:01.524 }' 00:28:01.524 00:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:01.524 00:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:01.524 00:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:28:01.524 00:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:28:01.524 00:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 321446 00:28:09.633 Initializing NVMe Controllers 00:28:09.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:09.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:09.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:09.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:09.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:09.633 Initialization complete. Launching workers. 
00:28:09.633 ======================================================== 00:28:09.633 Latency(us) 00:28:09.633 Device Information : IOPS MiB/s Average min max 00:28:09.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4093.50 15.99 15633.98 1851.46 58839.22 00:28:09.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4863.70 19.00 13162.88 1618.70 58544.35 00:28:09.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4833.90 18.88 13243.60 1563.13 61862.99 00:28:09.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4204.60 16.42 15271.37 1513.08 59225.95 00:28:09.633 ======================================================== 00:28:09.633 Total : 17995.70 70.30 14239.30 1513.08 61862.99 00:28:09.633 00:28:09.633 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:09.633 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:09.634 rmmod nvme_tcp 00:28:09.634 rmmod nvme_fabrics 00:28:09.634 rmmod nvme_keyring 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:09.634 00:33:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 321331 ']' 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 321331 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 321331 ']' 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 321331 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 321331 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 321331' 00:28:09.634 killing process with pid 321331 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 321331 00:28:09.634 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 321331 00:28:09.892 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:09.892 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:09.892 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:09.892 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:09.892 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:09.892 00:33:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:09.892 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:09.892 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:09.892 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:09.892 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.892 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.892 00:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:13.174 00:28:13.174 real 0m47.305s 00:28:13.174 user 2m39.647s 00:28:13.174 sys 0m11.486s 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.174 ************************************ 00:28:13.174 END TEST nvmf_perf_adq 00:28:13.174 ************************************ 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:28:13.174 ************************************ 00:28:13.174 START TEST nvmf_shutdown 00:28:13.174 ************************************ 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:13.174 * Looking for test storage... 00:28:13.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:13.174 00:33:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:13.174 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:13.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.175 --rc genhtml_branch_coverage=1 00:28:13.175 --rc genhtml_function_coverage=1 00:28:13.175 --rc genhtml_legend=1 00:28:13.175 --rc geninfo_all_blocks=1 00:28:13.175 --rc geninfo_unexecuted_blocks=1 00:28:13.175 00:28:13.175 ' 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:13.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.175 --rc genhtml_branch_coverage=1 00:28:13.175 --rc genhtml_function_coverage=1 00:28:13.175 --rc genhtml_legend=1 00:28:13.175 --rc geninfo_all_blocks=1 00:28:13.175 --rc geninfo_unexecuted_blocks=1 00:28:13.175 00:28:13.175 ' 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:13.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.175 --rc genhtml_branch_coverage=1 00:28:13.175 --rc genhtml_function_coverage=1 00:28:13.175 --rc genhtml_legend=1 00:28:13.175 --rc geninfo_all_blocks=1 00:28:13.175 --rc geninfo_unexecuted_blocks=1 00:28:13.175 00:28:13.175 ' 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:13.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.175 --rc genhtml_branch_coverage=1 00:28:13.175 --rc genhtml_function_coverage=1 00:28:13.175 --rc genhtml_legend=1 00:28:13.175 --rc geninfo_all_blocks=1 00:28:13.175 --rc geninfo_unexecuted_blocks=1 00:28:13.175 00:28:13.175 ' 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:13.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:13.175 ************************************ 00:28:13.175 START TEST nvmf_shutdown_tc1 00:28:13.175 ************************************ 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:13.175 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:13.176 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:13.176 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.176 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:13.176 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:13.176 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:13.176 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.176 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:13.176 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.176 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:13.176 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:13.176 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:13.176 00:33:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:15.213 00:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.213 00:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.213 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:15.214 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.214 00:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:15.214 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:15.214 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.214 00:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:15.214 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:15.214 00:33:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:15.214 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:15.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:15.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:28:15.513 00:28:15.513 --- 10.0.0.2 ping statistics --- 00:28:15.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.513 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:15.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:15.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:28:15.513 00:28:15.513 --- 10.0.0.1 ping statistics --- 00:28:15.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.513 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=324782 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 324782 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 324782 ']' 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:15.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:15.513 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:15.513 [2024-11-18 00:33:39.216253] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:15.513 [2024-11-18 00:33:39.216369] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.513 [2024-11-18 00:33:39.287470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:15.808 [2024-11-18 00:33:39.336235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.808 [2024-11-18 00:33:39.336300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.808 [2024-11-18 00:33:39.336331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.808 [2024-11-18 00:33:39.336343] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:15.808 [2024-11-18 00:33:39.336352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
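The `nvmf_tcp_init` sequence traced above (common.sh@250-291) moves one port of the NIC pair into a private network namespace so target and initiator talk over real TCP instead of loopback. A minimal sketch of that sequence follows; the interface names (`cvl_0_0`/`cvl_0_1`) and addresses are taken from this run, while the `$RUN` indirection is an assumption added here so the steps can be previewed without root or the cvl devices present:

```shell
RUN="${RUN:-echo}"   # default: print commands only; set RUN= to execute (requires root)

nvmf_tcp_init_sketch() {
    local target_if=$1 initiator_if=$2 ns=${1}_ns_spdk
    $RUN ip -4 addr flush "$target_if"
    $RUN ip -4 addr flush "$initiator_if"
    $RUN ip netns add "$ns"
    # target NIC moves into the namespace; initiator side stays in the root ns
    $RUN ip link set "$target_if" netns "$ns"
    $RUN ip addr add 10.0.0.1/24 dev "$initiator_if"
    $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $RUN ip link set "$initiator_if" up
    $RUN ip netns exec "$ns" ip link set "$target_if" up
    $RUN ip netns exec "$ns" ip link set lo up
    # comment-tagged ACCEPT rule, as installed by the ipts wrapper (common.sh@790)
    $RUN iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"
}

nvmf_tcp_init_sketch cvl_0_0 cvl_0_1
```

The bidirectional pings in the trace then confirm the route before `NVMF_APP` is prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is why the target later binds 10.0.0.2:4420 inside the namespace.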
00:28:15.808 [2024-11-18 00:33:39.337828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:15.808 [2024-11-18 00:33:39.337893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:15.808 [2024-11-18 00:33:39.337974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:15.808 [2024-11-18 00:33:39.337976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:15.808 [2024-11-18 00:33:39.481287] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.808 00:33:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
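The shutdown.sh@28-29 loop traced here expands `num_subsystems=({1..10})` and runs `cat` once per subsystem to append an RPC batch to rpcs.txt (freshly removed at @27). The RPC bodies themselves are not visible in this log, so the line below is a placeholder; only the loop-plus-heredoc shape is taken from the trace:

```shell
rpcs=$(mktemp)              # stand-in for $rootdir/test/nvmf/target/rpcs.txt
num_subsystems=({1..10})

for i in "${num_subsystems[@]}"; do
    # placeholder RPC; the real batch per subsystem lives in shutdown.sh
    cat >>"$rpcs" <<EOF
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
EOF
done
```

Accumulating all batches into one file lets a single `rpc_cmd` invocation (shutdown.sh@36) replay them against the target, which is why the trace shows ten `cat` entries but only one `rpc_cmd`.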
00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.808 00:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:15.808 Malloc1 00:28:15.808 [2024-11-18 00:33:39.574058] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.808 Malloc2 00:28:16.106 Malloc3 00:28:16.106 Malloc4 00:28:16.106 Malloc5 00:28:16.106 Malloc6 00:28:16.106 Malloc7 00:28:16.106 Malloc8 00:28:16.366 Malloc9 
00:28:16.366 Malloc10 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=324854 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 324854 /var/tmp/bdevperf.sock 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 324854 ']' 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:16.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
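The `gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10` call above (fed to bdev_svc as `--json /dev/fd/63`) is traced in full below: one heredoc JSON fragment per subsystem ID is pushed into `config=()`, then the array is joined with `IFS=,` and printed. A standalone sketch of that pattern, with defaults filled in for `TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, and `NVMF_PORT` (in the real helper these come from nvmf/common.sh, and the result is also normalized through `jq .`, omitted here):

```shell
gen_nvmf_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"   # objects join as "},{", matching the printf in the trace
}

gen_nvmf_target_json_sketch 1 2 3
```

Joining with `"${config[*]}"` under `IFS=,` is what produces the `},{` seams visible in the expanded `printf '%s\n'` output further down in this log.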
00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.366 { 00:28:16.366 "params": { 00:28:16.366 "name": "Nvme$subsystem", 00:28:16.366 "trtype": "$TEST_TRANSPORT", 00:28:16.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.366 "adrfam": "ipv4", 00:28:16.366 "trsvcid": "$NVMF_PORT", 00:28:16.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.366 "hdgst": ${hdgst:-false}, 00:28:16.366 "ddgst": ${ddgst:-false} 00:28:16.366 }, 00:28:16.366 "method": "bdev_nvme_attach_controller" 00:28:16.366 } 00:28:16.366 EOF 00:28:16.366 )") 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.366 { 00:28:16.366 "params": { 00:28:16.366 "name": "Nvme$subsystem", 00:28:16.366 "trtype": "$TEST_TRANSPORT", 00:28:16.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.366 "adrfam": "ipv4", 00:28:16.366 "trsvcid": "$NVMF_PORT", 00:28:16.366 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.366 "hdgst": ${hdgst:-false}, 00:28:16.366 "ddgst": ${ddgst:-false} 00:28:16.366 }, 00:28:16.366 "method": "bdev_nvme_attach_controller" 00:28:16.366 } 00:28:16.366 EOF 00:28:16.366 )") 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.366 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.366 { 00:28:16.366 "params": { 00:28:16.366 "name": "Nvme$subsystem", 00:28:16.366 "trtype": "$TEST_TRANSPORT", 00:28:16.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.367 "adrfam": "ipv4", 00:28:16.367 "trsvcid": "$NVMF_PORT", 00:28:16.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.367 "hdgst": ${hdgst:-false}, 00:28:16.367 "ddgst": ${ddgst:-false} 00:28:16.367 }, 00:28:16.367 "method": "bdev_nvme_attach_controller" 00:28:16.367 } 00:28:16.367 EOF 00:28:16.367 )") 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.367 { 00:28:16.367 "params": { 00:28:16.367 "name": "Nvme$subsystem", 00:28:16.367 "trtype": "$TEST_TRANSPORT", 00:28:16.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.367 "adrfam": "ipv4", 00:28:16.367 "trsvcid": "$NVMF_PORT", 00:28:16.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.367 "hdgst": 
${hdgst:-false}, 00:28:16.367 "ddgst": ${ddgst:-false} 00:28:16.367 }, 00:28:16.367 "method": "bdev_nvme_attach_controller" 00:28:16.367 } 00:28:16.367 EOF 00:28:16.367 )") 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.367 { 00:28:16.367 "params": { 00:28:16.367 "name": "Nvme$subsystem", 00:28:16.367 "trtype": "$TEST_TRANSPORT", 00:28:16.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.367 "adrfam": "ipv4", 00:28:16.367 "trsvcid": "$NVMF_PORT", 00:28:16.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.367 "hdgst": ${hdgst:-false}, 00:28:16.367 "ddgst": ${ddgst:-false} 00:28:16.367 }, 00:28:16.367 "method": "bdev_nvme_attach_controller" 00:28:16.367 } 00:28:16.367 EOF 00:28:16.367 )") 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.367 { 00:28:16.367 "params": { 00:28:16.367 "name": "Nvme$subsystem", 00:28:16.367 "trtype": "$TEST_TRANSPORT", 00:28:16.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.367 "adrfam": "ipv4", 00:28:16.367 "trsvcid": "$NVMF_PORT", 00:28:16.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.367 "hdgst": ${hdgst:-false}, 00:28:16.367 "ddgst": ${ddgst:-false} 00:28:16.367 }, 00:28:16.367 "method": "bdev_nvme_attach_controller" 
00:28:16.367 } 00:28:16.367 EOF 00:28:16.367 )") 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.367 { 00:28:16.367 "params": { 00:28:16.367 "name": "Nvme$subsystem", 00:28:16.367 "trtype": "$TEST_TRANSPORT", 00:28:16.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.367 "adrfam": "ipv4", 00:28:16.367 "trsvcid": "$NVMF_PORT", 00:28:16.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.367 "hdgst": ${hdgst:-false}, 00:28:16.367 "ddgst": ${ddgst:-false} 00:28:16.367 }, 00:28:16.367 "method": "bdev_nvme_attach_controller" 00:28:16.367 } 00:28:16.367 EOF 00:28:16.367 )") 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.367 { 00:28:16.367 "params": { 00:28:16.367 "name": "Nvme$subsystem", 00:28:16.367 "trtype": "$TEST_TRANSPORT", 00:28:16.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.367 "adrfam": "ipv4", 00:28:16.367 "trsvcid": "$NVMF_PORT", 00:28:16.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.367 "hdgst": ${hdgst:-false}, 00:28:16.367 "ddgst": ${ddgst:-false} 00:28:16.367 }, 00:28:16.367 "method": "bdev_nvme_attach_controller" 00:28:16.367 } 00:28:16.367 EOF 00:28:16.367 )") 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.367 { 00:28:16.367 "params": { 00:28:16.367 "name": "Nvme$subsystem", 00:28:16.367 "trtype": "$TEST_TRANSPORT", 00:28:16.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.367 "adrfam": "ipv4", 00:28:16.367 "trsvcid": "$NVMF_PORT", 00:28:16.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.367 "hdgst": ${hdgst:-false}, 00:28:16.367 "ddgst": ${ddgst:-false} 00:28:16.367 }, 00:28:16.367 "method": "bdev_nvme_attach_controller" 00:28:16.367 } 00:28:16.367 EOF 00:28:16.367 )") 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:16.367 { 00:28:16.367 "params": { 00:28:16.367 "name": "Nvme$subsystem", 00:28:16.367 "trtype": "$TEST_TRANSPORT", 00:28:16.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.367 "adrfam": "ipv4", 00:28:16.367 "trsvcid": "$NVMF_PORT", 00:28:16.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.367 "hdgst": ${hdgst:-false}, 00:28:16.367 "ddgst": ${ddgst:-false} 00:28:16.367 }, 00:28:16.367 "method": "bdev_nvme_attach_controller" 00:28:16.367 } 00:28:16.367 EOF 00:28:16.367 )") 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:16.367 00:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:16.367 "params": { 00:28:16.367 "name": "Nvme1", 00:28:16.367 "trtype": "tcp", 00:28:16.367 "traddr": "10.0.0.2", 00:28:16.367 "adrfam": "ipv4", 00:28:16.367 "trsvcid": "4420", 00:28:16.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:16.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:16.367 "hdgst": false, 00:28:16.367 "ddgst": false 00:28:16.367 }, 00:28:16.367 "method": "bdev_nvme_attach_controller" 00:28:16.368 },{ 00:28:16.368 "params": { 00:28:16.368 "name": "Nvme2", 00:28:16.368 "trtype": "tcp", 00:28:16.368 "traddr": "10.0.0.2", 00:28:16.368 "adrfam": "ipv4", 00:28:16.368 "trsvcid": "4420", 00:28:16.368 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:16.368 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:16.368 "hdgst": false, 00:28:16.368 "ddgst": false 00:28:16.368 }, 00:28:16.368 "method": "bdev_nvme_attach_controller" 00:28:16.368 },{ 00:28:16.368 "params": { 00:28:16.368 "name": "Nvme3", 00:28:16.368 "trtype": "tcp", 00:28:16.368 "traddr": "10.0.0.2", 00:28:16.368 "adrfam": "ipv4", 00:28:16.368 "trsvcid": "4420", 00:28:16.368 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:16.368 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:16.368 "hdgst": false, 00:28:16.368 "ddgst": false 00:28:16.368 }, 00:28:16.368 "method": "bdev_nvme_attach_controller" 00:28:16.368 },{ 00:28:16.368 "params": { 00:28:16.368 "name": "Nvme4", 00:28:16.368 "trtype": "tcp", 00:28:16.368 "traddr": "10.0.0.2", 00:28:16.368 "adrfam": "ipv4", 00:28:16.368 "trsvcid": "4420", 00:28:16.368 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:16.368 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:16.368 "hdgst": false, 00:28:16.368 "ddgst": false 00:28:16.368 }, 00:28:16.368 "method": "bdev_nvme_attach_controller" 00:28:16.368 },{ 
00:28:16.368 "params": { 00:28:16.368 "name": "Nvme5", 00:28:16.368 "trtype": "tcp", 00:28:16.368 "traddr": "10.0.0.2", 00:28:16.368 "adrfam": "ipv4", 00:28:16.368 "trsvcid": "4420", 00:28:16.368 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:16.368 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:16.368 "hdgst": false, 00:28:16.368 "ddgst": false 00:28:16.368 }, 00:28:16.368 "method": "bdev_nvme_attach_controller" 00:28:16.368 },{ 00:28:16.368 "params": { 00:28:16.368 "name": "Nvme6", 00:28:16.368 "trtype": "tcp", 00:28:16.368 "traddr": "10.0.0.2", 00:28:16.368 "adrfam": "ipv4", 00:28:16.368 "trsvcid": "4420", 00:28:16.368 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:16.368 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:16.368 "hdgst": false, 00:28:16.368 "ddgst": false 00:28:16.368 }, 00:28:16.368 "method": "bdev_nvme_attach_controller" 00:28:16.368 },{ 00:28:16.368 "params": { 00:28:16.368 "name": "Nvme7", 00:28:16.368 "trtype": "tcp", 00:28:16.368 "traddr": "10.0.0.2", 00:28:16.368 "adrfam": "ipv4", 00:28:16.368 "trsvcid": "4420", 00:28:16.368 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:16.368 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:16.368 "hdgst": false, 00:28:16.368 "ddgst": false 00:28:16.368 }, 00:28:16.368 "method": "bdev_nvme_attach_controller" 00:28:16.368 },{ 00:28:16.368 "params": { 00:28:16.368 "name": "Nvme8", 00:28:16.368 "trtype": "tcp", 00:28:16.368 "traddr": "10.0.0.2", 00:28:16.368 "adrfam": "ipv4", 00:28:16.368 "trsvcid": "4420", 00:28:16.368 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:16.368 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:16.368 "hdgst": false, 00:28:16.368 "ddgst": false 00:28:16.368 }, 00:28:16.368 "method": "bdev_nvme_attach_controller" 00:28:16.368 },{ 00:28:16.368 "params": { 00:28:16.368 "name": "Nvme9", 00:28:16.368 "trtype": "tcp", 00:28:16.368 "traddr": "10.0.0.2", 00:28:16.368 "adrfam": "ipv4", 00:28:16.368 "trsvcid": "4420", 00:28:16.368 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:16.368 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:28:16.368 "hdgst": false, 00:28:16.368 "ddgst": false 00:28:16.368 }, 00:28:16.368 "method": "bdev_nvme_attach_controller" 00:28:16.368 },{ 00:28:16.368 "params": { 00:28:16.368 "name": "Nvme10", 00:28:16.368 "trtype": "tcp", 00:28:16.368 "traddr": "10.0.0.2", 00:28:16.368 "adrfam": "ipv4", 00:28:16.368 "trsvcid": "4420", 00:28:16.368 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:16.368 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:16.368 "hdgst": false, 00:28:16.368 "ddgst": false 00:28:16.368 }, 00:28:16.368 "method": "bdev_nvme_attach_controller" 00:28:16.368 }' 00:28:16.368 [2024-11-18 00:33:40.073889] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:16.368 [2024-11-18 00:33:40.073980] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:16.368 [2024-11-18 00:33:40.154495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.627 [2024-11-18 00:33:40.203426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.527 00:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:18.527 00:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:18.527 00:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:18.527 00:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.527 00:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.527 00:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.527 00:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 324854 00:28:18.527 00:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:18.527 00:33:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:19.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 324854 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:19.461 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 324782 00:28:19.461 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:19.461 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:19.461 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:19.461 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:19.461 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.461 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.461 { 00:28:19.461 "params": { 00:28:19.461 "name": "Nvme$subsystem", 00:28:19.461 "trtype": "$TEST_TRANSPORT", 00:28:19.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.461 "adrfam": "ipv4", 00:28:19.461 "trsvcid": "$NVMF_PORT", 00:28:19.461 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.461 "hdgst": ${hdgst:-false}, 00:28:19.461 "ddgst": ${ddgst:-false} 00:28:19.462 }, 00:28:19.462 "method": "bdev_nvme_attach_controller" 00:28:19.462 } 00:28:19.462 EOF 00:28:19.462 )") 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.462 { 00:28:19.462 "params": { 00:28:19.462 "name": "Nvme$subsystem", 00:28:19.462 "trtype": "$TEST_TRANSPORT", 00:28:19.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.462 "adrfam": "ipv4", 00:28:19.462 "trsvcid": "$NVMF_PORT", 00:28:19.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.462 "hdgst": ${hdgst:-false}, 00:28:19.462 "ddgst": ${ddgst:-false} 00:28:19.462 }, 00:28:19.462 "method": "bdev_nvme_attach_controller" 00:28:19.462 } 00:28:19.462 EOF 00:28:19.462 )") 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.462 { 00:28:19.462 "params": { 00:28:19.462 "name": "Nvme$subsystem", 00:28:19.462 "trtype": "$TEST_TRANSPORT", 00:28:19.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.462 "adrfam": "ipv4", 00:28:19.462 "trsvcid": "$NVMF_PORT", 00:28:19.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.462 "hdgst": 
${hdgst:-false}, 00:28:19.462 "ddgst": ${ddgst:-false} 00:28:19.462 }, 00:28:19.462 "method": "bdev_nvme_attach_controller" 00:28:19.462 } 00:28:19.462 EOF 00:28:19.462 )") 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.462 { 00:28:19.462 "params": { 00:28:19.462 "name": "Nvme$subsystem", 00:28:19.462 "trtype": "$TEST_TRANSPORT", 00:28:19.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.462 "adrfam": "ipv4", 00:28:19.462 "trsvcid": "$NVMF_PORT", 00:28:19.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.462 "hdgst": ${hdgst:-false}, 00:28:19.462 "ddgst": ${ddgst:-false} 00:28:19.462 }, 00:28:19.462 "method": "bdev_nvme_attach_controller" 00:28:19.462 } 00:28:19.462 EOF 00:28:19.462 )") 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.462 { 00:28:19.462 "params": { 00:28:19.462 "name": "Nvme$subsystem", 00:28:19.462 "trtype": "$TEST_TRANSPORT", 00:28:19.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.462 "adrfam": "ipv4", 00:28:19.462 "trsvcid": "$NVMF_PORT", 00:28:19.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.462 "hdgst": ${hdgst:-false}, 00:28:19.462 "ddgst": ${ddgst:-false} 00:28:19.462 }, 00:28:19.462 "method": "bdev_nvme_attach_controller" 
00:28:19.462 } 00:28:19.462 EOF 00:28:19.462 )") 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.462 { 00:28:19.462 "params": { 00:28:19.462 "name": "Nvme$subsystem", 00:28:19.462 "trtype": "$TEST_TRANSPORT", 00:28:19.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.462 "adrfam": "ipv4", 00:28:19.462 "trsvcid": "$NVMF_PORT", 00:28:19.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.462 "hdgst": ${hdgst:-false}, 00:28:19.462 "ddgst": ${ddgst:-false} 00:28:19.462 }, 00:28:19.462 "method": "bdev_nvme_attach_controller" 00:28:19.462 } 00:28:19.462 EOF 00:28:19.462 )") 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.462 { 00:28:19.462 "params": { 00:28:19.462 "name": "Nvme$subsystem", 00:28:19.462 "trtype": "$TEST_TRANSPORT", 00:28:19.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.462 "adrfam": "ipv4", 00:28:19.462 "trsvcid": "$NVMF_PORT", 00:28:19.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.462 "hdgst": ${hdgst:-false}, 00:28:19.462 "ddgst": ${ddgst:-false} 00:28:19.462 }, 00:28:19.462 "method": "bdev_nvme_attach_controller" 00:28:19.462 } 00:28:19.462 EOF 00:28:19.462 )") 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.462 { 00:28:19.462 "params": { 00:28:19.462 "name": "Nvme$subsystem", 00:28:19.462 "trtype": "$TEST_TRANSPORT", 00:28:19.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.462 "adrfam": "ipv4", 00:28:19.462 "trsvcid": "$NVMF_PORT", 00:28:19.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.462 "hdgst": ${hdgst:-false}, 00:28:19.462 "ddgst": ${ddgst:-false} 00:28:19.462 }, 00:28:19.462 "method": "bdev_nvme_attach_controller" 00:28:19.462 } 00:28:19.462 EOF 00:28:19.462 )") 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.462 { 00:28:19.462 "params": { 00:28:19.462 "name": "Nvme$subsystem", 00:28:19.462 "trtype": "$TEST_TRANSPORT", 00:28:19.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.462 "adrfam": "ipv4", 00:28:19.462 "trsvcid": "$NVMF_PORT", 00:28:19.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.462 "hdgst": ${hdgst:-false}, 00:28:19.462 "ddgst": ${ddgst:-false} 00:28:19.462 }, 00:28:19.462 "method": "bdev_nvme_attach_controller" 00:28:19.462 } 00:28:19.462 EOF 00:28:19.462 )") 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:19.462 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:19.462 { 00:28:19.462 "params": { 00:28:19.463 "name": "Nvme$subsystem", 00:28:19.463 "trtype": "$TEST_TRANSPORT", 00:28:19.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.463 "adrfam": "ipv4", 00:28:19.463 "trsvcid": "$NVMF_PORT", 00:28:19.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.463 "hdgst": ${hdgst:-false}, 00:28:19.463 "ddgst": ${ddgst:-false} 00:28:19.463 }, 00:28:19.463 "method": "bdev_nvme_attach_controller" 00:28:19.463 } 00:28:19.463 EOF 00:28:19.463 )") 00:28:19.463 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:19.463 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:28:19.463 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:19.463 00:33:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:19.463 "params": { 00:28:19.463 "name": "Nvme1", 00:28:19.463 "trtype": "tcp", 00:28:19.463 "traddr": "10.0.0.2", 00:28:19.463 "adrfam": "ipv4", 00:28:19.463 "trsvcid": "4420", 00:28:19.463 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:19.463 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:19.463 "hdgst": false, 00:28:19.463 "ddgst": false 00:28:19.463 }, 00:28:19.463 "method": "bdev_nvme_attach_controller" 00:28:19.463 },{ 00:28:19.463 "params": { 00:28:19.463 "name": "Nvme2", 00:28:19.463 "trtype": "tcp", 00:28:19.463 "traddr": "10.0.0.2", 00:28:19.463 "adrfam": "ipv4", 00:28:19.463 "trsvcid": "4420", 00:28:19.463 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:19.463 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:19.463 "hdgst": false, 00:28:19.463 "ddgst": false 00:28:19.463 }, 
00:28:19.463 "method": "bdev_nvme_attach_controller" 00:28:19.463 },{ 00:28:19.463 "params": { 00:28:19.463 "name": "Nvme3", 00:28:19.463 "trtype": "tcp", 00:28:19.463 "traddr": "10.0.0.2", 00:28:19.463 "adrfam": "ipv4", 00:28:19.463 "trsvcid": "4420", 00:28:19.463 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:19.463 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:19.463 "hdgst": false, 00:28:19.463 "ddgst": false 00:28:19.463 }, 00:28:19.463 "method": "bdev_nvme_attach_controller" 00:28:19.463 },{ 00:28:19.463 "params": { 00:28:19.463 "name": "Nvme4", 00:28:19.463 "trtype": "tcp", 00:28:19.463 "traddr": "10.0.0.2", 00:28:19.463 "adrfam": "ipv4", 00:28:19.463 "trsvcid": "4420", 00:28:19.463 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:19.463 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:19.463 "hdgst": false, 00:28:19.463 "ddgst": false 00:28:19.463 }, 00:28:19.463 "method": "bdev_nvme_attach_controller" 00:28:19.463 },{ 00:28:19.463 "params": { 00:28:19.463 "name": "Nvme5", 00:28:19.463 "trtype": "tcp", 00:28:19.463 "traddr": "10.0.0.2", 00:28:19.463 "adrfam": "ipv4", 00:28:19.463 "trsvcid": "4420", 00:28:19.463 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:19.463 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:19.463 "hdgst": false, 00:28:19.463 "ddgst": false 00:28:19.463 }, 00:28:19.463 "method": "bdev_nvme_attach_controller" 00:28:19.463 },{ 00:28:19.463 "params": { 00:28:19.463 "name": "Nvme6", 00:28:19.463 "trtype": "tcp", 00:28:19.463 "traddr": "10.0.0.2", 00:28:19.463 "adrfam": "ipv4", 00:28:19.463 "trsvcid": "4420", 00:28:19.463 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:19.463 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:19.463 "hdgst": false, 00:28:19.463 "ddgst": false 00:28:19.463 }, 00:28:19.463 "method": "bdev_nvme_attach_controller" 00:28:19.463 },{ 00:28:19.463 "params": { 00:28:19.463 "name": "Nvme7", 00:28:19.463 "trtype": "tcp", 00:28:19.463 "traddr": "10.0.0.2", 00:28:19.463 "adrfam": "ipv4", 00:28:19.463 "trsvcid": "4420", 00:28:19.463 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:19.463 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:19.463 "hdgst": false, 00:28:19.463 "ddgst": false 00:28:19.463 }, 00:28:19.463 "method": "bdev_nvme_attach_controller" 00:28:19.463 },{ 00:28:19.463 "params": { 00:28:19.463 "name": "Nvme8", 00:28:19.463 "trtype": "tcp", 00:28:19.463 "traddr": "10.0.0.2", 00:28:19.463 "adrfam": "ipv4", 00:28:19.463 "trsvcid": "4420", 00:28:19.463 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:19.463 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:19.463 "hdgst": false, 00:28:19.463 "ddgst": false 00:28:19.463 }, 00:28:19.463 "method": "bdev_nvme_attach_controller" 00:28:19.463 },{ 00:28:19.463 "params": { 00:28:19.463 "name": "Nvme9", 00:28:19.463 "trtype": "tcp", 00:28:19.463 "traddr": "10.0.0.2", 00:28:19.463 "adrfam": "ipv4", 00:28:19.463 "trsvcid": "4420", 00:28:19.463 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:19.463 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:19.463 "hdgst": false, 00:28:19.463 "ddgst": false 00:28:19.463 }, 00:28:19.463 "method": "bdev_nvme_attach_controller" 00:28:19.463 },{ 00:28:19.463 "params": { 00:28:19.463 "name": "Nvme10", 00:28:19.463 "trtype": "tcp", 00:28:19.463 "traddr": "10.0.0.2", 00:28:19.463 "adrfam": "ipv4", 00:28:19.463 "trsvcid": "4420", 00:28:19.463 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:19.463 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:19.463 "hdgst": false, 00:28:19.463 "ddgst": false 00:28:19.463 }, 00:28:19.463 "method": "bdev_nvme_attach_controller" 00:28:19.463 }' 00:28:19.463 [2024-11-18 00:33:43.151062] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:28:19.463 [2024-11-18 00:33:43.151147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325272 ] 00:28:19.463 [2024-11-18 00:33:43.226327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.463 [2024-11-18 00:33:43.275261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.364 Running I/O for 1 seconds... 00:28:22.557 1815.00 IOPS, 113.44 MiB/s 00:28:22.557 Latency(us) 00:28:22.557 [2024-11-17T23:33:46.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.557 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.557 Verification LBA range: start 0x0 length 0x400 00:28:22.557 Nvme1n1 : 1.10 249.61 15.60 0.00 0.00 244661.18 14175.19 237677.23 00:28:22.557 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.557 Verification LBA range: start 0x0 length 0x400 00:28:22.557 Nvme2n1 : 1.14 229.86 14.37 0.00 0.00 270576.40 5291.43 250104.79 00:28:22.557 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.557 Verification LBA range: start 0x0 length 0x400 00:28:22.557 Nvme3n1 : 1.09 233.91 14.62 0.00 0.00 261647.74 17379.18 253211.69 00:28:22.557 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.557 Verification LBA range: start 0x0 length 0x400 00:28:22.557 Nvme4n1 : 1.13 251.31 15.71 0.00 0.00 233396.67 19709.35 253211.69 00:28:22.557 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.557 Verification LBA range: start 0x0 length 0x400 00:28:22.557 Nvme5n1 : 1.15 223.52 13.97 0.00 0.00 265300.20 21942.42 248551.35 00:28:22.557 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.557 Verification LBA range: start 0x0 
length 0x400 00:28:22.557 Nvme6n1 : 1.15 222.76 13.92 0.00 0.00 261672.01 21748.24 251658.24 00:28:22.557 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.557 Verification LBA range: start 0x0 length 0x400 00:28:22.557 Nvme7n1 : 1.14 224.41 14.03 0.00 0.00 254998.76 22330.79 256318.58 00:28:22.557 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.557 Verification LBA range: start 0x0 length 0x400 00:28:22.557 Nvme8n1 : 1.18 270.74 16.92 0.00 0.00 208586.52 12087.75 256318.58 00:28:22.557 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.557 Verification LBA range: start 0x0 length 0x400 00:28:22.557 Nvme9n1 : 1.15 221.69 13.86 0.00 0.00 249753.22 23787.14 267192.70 00:28:22.557 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:22.557 Verification LBA range: start 0x0 length 0x400 00:28:22.557 Nvme10n1 : 1.20 267.18 16.70 0.00 0.00 204785.47 5728.33 285834.05 00:28:22.557 [2024-11-17T23:33:46.379Z] =================================================================================================================== 00:28:22.557 [2024-11-17T23:33:46.379Z] Total : 2394.99 149.69 0.00 0.00 243641.75 5291.43 285834.05 00:28:22.557 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:22.557 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:22.557 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- target/shutdown.sh@46 -- # nvmftestfini 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:22.816 rmmod nvme_tcp 00:28:22.816 rmmod nvme_fabrics 00:28:22.816 rmmod nvme_keyring 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 324782 ']' 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 324782 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 324782 ']' 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 324782 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 324782 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 324782' 00:28:22.816 killing process with pid 324782 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 324782 00:28:22.816 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 324782 00:28:23.384 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.384 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.384 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:23.384 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:23.384 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:23.384 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.384 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:23.384 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.384 00:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.384 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.384 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.384 00:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:25.291 00:28:25.291 real 0m12.119s 00:28:25.291 user 0m35.472s 00:28:25.291 sys 0m3.293s 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:25.291 ************************************ 00:28:25.291 END TEST nvmf_shutdown_tc1 00:28:25.291 ************************************ 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:25.291 ************************************ 00:28:25.291 START TEST nvmf_shutdown_tc2 00:28:25.291 ************************************ 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:25.291 00:33:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:25.291 00:33:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.291 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:25.292 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:25.292 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:25.292 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.292 00:33:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:25.292 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:25.292 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:28:25.550 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:25.550 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:25.550 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:25.550 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:25.550 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:25.550 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:25.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:25.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:28:25.551 00:28:25.551 --- 10.0.0.2 ping statistics --- 00:28:25.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.551 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:25.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:25.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:28:25.551 00:28:25.551 --- 10.0.0.1 ping statistics --- 00:28:25.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.551 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.551 
00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=326091 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 326091 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 326091 ']' 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:25.551 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.551 [2024-11-18 00:33:49.295146] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:28:25.551 [2024-11-18 00:33:49.295231] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.551 [2024-11-18 00:33:49.368213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:25.810 [2024-11-18 00:33:49.414244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.810 [2024-11-18 00:33:49.414300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:25.810 [2024-11-18 00:33:49.414335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.810 [2024-11-18 00:33:49.414348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.810 [2024-11-18 00:33:49.414365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:25.810 [2024-11-18 00:33:49.415855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.810 [2024-11-18 00:33:49.415920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:25.810 [2024-11-18 00:33:49.415987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:25.810 [2024-11-18 00:33:49.415990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.810 [2024-11-18 00:33:49.552275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.810 00:33:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.810 00:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:25.810 Malloc1 00:28:26.069 [2024-11-18 00:33:49.641193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.069 Malloc2 00:28:26.069 Malloc3 00:28:26.069 Malloc4 00:28:26.069 Malloc5 00:28:26.069 Malloc6 00:28:26.328 Malloc7 00:28:26.328 Malloc8 00:28:26.328 Malloc9 
00:28:26.328 Malloc10 00:28:26.328 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.328 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:26.328 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:26.328 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.328 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=326221 00:28:26.328 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 326221 /var/tmp/bdevperf.sock 00:28:26.328 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 326221 ']' 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:26.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.329 { 00:28:26.329 "params": { 00:28:26.329 "name": "Nvme$subsystem", 00:28:26.329 "trtype": "$TEST_TRANSPORT", 00:28:26.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.329 "adrfam": "ipv4", 00:28:26.329 "trsvcid": "$NVMF_PORT", 00:28:26.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.329 "hdgst": ${hdgst:-false}, 00:28:26.329 "ddgst": ${ddgst:-false} 00:28:26.329 }, 00:28:26.329 "method": "bdev_nvme_attach_controller" 00:28:26.329 } 00:28:26.329 EOF 00:28:26.329 )") 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.329 { 00:28:26.329 "params": { 00:28:26.329 "name": "Nvme$subsystem", 00:28:26.329 "trtype": "$TEST_TRANSPORT", 00:28:26.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.329 
"adrfam": "ipv4", 00:28:26.329 "trsvcid": "$NVMF_PORT", 00:28:26.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.329 "hdgst": ${hdgst:-false}, 00:28:26.329 "ddgst": ${ddgst:-false} 00:28:26.329 }, 00:28:26.329 "method": "bdev_nvme_attach_controller" 00:28:26.329 } 00:28:26.329 EOF 00:28:26.329 )") 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.329 { 00:28:26.329 "params": { 00:28:26.329 "name": "Nvme$subsystem", 00:28:26.329 "trtype": "$TEST_TRANSPORT", 00:28:26.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.329 "adrfam": "ipv4", 00:28:26.329 "trsvcid": "$NVMF_PORT", 00:28:26.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.329 "hdgst": ${hdgst:-false}, 00:28:26.329 "ddgst": ${ddgst:-false} 00:28:26.329 }, 00:28:26.329 "method": "bdev_nvme_attach_controller" 00:28:26.329 } 00:28:26.329 EOF 00:28:26.329 )") 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.329 { 00:28:26.329 "params": { 00:28:26.329 "name": "Nvme$subsystem", 00:28:26.329 "trtype": "$TEST_TRANSPORT", 00:28:26.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.329 "adrfam": "ipv4", 00:28:26.329 "trsvcid": "$NVMF_PORT", 00:28:26.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:28:26.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.329 "hdgst": ${hdgst:-false}, 00:28:26.329 "ddgst": ${ddgst:-false} 00:28:26.329 }, 00:28:26.329 "method": "bdev_nvme_attach_controller" 00:28:26.329 } 00:28:26.329 EOF 00:28:26.329 )") 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.329 { 00:28:26.329 "params": { 00:28:26.329 "name": "Nvme$subsystem", 00:28:26.329 "trtype": "$TEST_TRANSPORT", 00:28:26.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.329 "adrfam": "ipv4", 00:28:26.329 "trsvcid": "$NVMF_PORT", 00:28:26.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.329 "hdgst": ${hdgst:-false}, 00:28:26.329 "ddgst": ${ddgst:-false} 00:28:26.329 }, 00:28:26.329 "method": "bdev_nvme_attach_controller" 00:28:26.329 } 00:28:26.329 EOF 00:28:26.329 )") 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.329 { 00:28:26.329 "params": { 00:28:26.329 "name": "Nvme$subsystem", 00:28:26.329 "trtype": "$TEST_TRANSPORT", 00:28:26.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.329 "adrfam": "ipv4", 00:28:26.329 "trsvcid": "$NVMF_PORT", 00:28:26.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.329 "hdgst": ${hdgst:-false}, 00:28:26.329 "ddgst": 
${ddgst:-false} 00:28:26.329 }, 00:28:26.329 "method": "bdev_nvme_attach_controller" 00:28:26.329 } 00:28:26.329 EOF 00:28:26.329 )") 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.329 { 00:28:26.329 "params": { 00:28:26.329 "name": "Nvme$subsystem", 00:28:26.329 "trtype": "$TEST_TRANSPORT", 00:28:26.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.329 "adrfam": "ipv4", 00:28:26.329 "trsvcid": "$NVMF_PORT", 00:28:26.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.329 "hdgst": ${hdgst:-false}, 00:28:26.329 "ddgst": ${ddgst:-false} 00:28:26.329 }, 00:28:26.329 "method": "bdev_nvme_attach_controller" 00:28:26.329 } 00:28:26.329 EOF 00:28:26.329 )") 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.329 { 00:28:26.329 "params": { 00:28:26.329 "name": "Nvme$subsystem", 00:28:26.329 "trtype": "$TEST_TRANSPORT", 00:28:26.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.329 "adrfam": "ipv4", 00:28:26.329 "trsvcid": "$NVMF_PORT", 00:28:26.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.329 "hdgst": ${hdgst:-false}, 00:28:26.329 "ddgst": ${ddgst:-false} 00:28:26.329 }, 00:28:26.329 "method": "bdev_nvme_attach_controller" 00:28:26.329 } 00:28:26.329 EOF 00:28:26.329 
)") 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.329 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.329 { 00:28:26.329 "params": { 00:28:26.329 "name": "Nvme$subsystem", 00:28:26.329 "trtype": "$TEST_TRANSPORT", 00:28:26.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.329 "adrfam": "ipv4", 00:28:26.329 "trsvcid": "$NVMF_PORT", 00:28:26.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.329 "hdgst": ${hdgst:-false}, 00:28:26.329 "ddgst": ${ddgst:-false} 00:28:26.329 }, 00:28:26.329 "method": "bdev_nvme_attach_controller" 00:28:26.329 } 00:28:26.330 EOF 00:28:26.330 )") 00:28:26.330 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.330 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:26.330 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:26.330 { 00:28:26.330 "params": { 00:28:26.330 "name": "Nvme$subsystem", 00:28:26.330 "trtype": "$TEST_TRANSPORT", 00:28:26.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.330 "adrfam": "ipv4", 00:28:26.330 "trsvcid": "$NVMF_PORT", 00:28:26.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.330 "hdgst": ${hdgst:-false}, 00:28:26.330 "ddgst": ${ddgst:-false} 00:28:26.330 }, 00:28:26.330 "method": "bdev_nvme_attach_controller" 00:28:26.330 } 00:28:26.330 EOF 00:28:26.330 )") 00:28:26.330 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:26.330 
00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:28:26.330 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:26.330 00:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:26.330 "params": { 00:28:26.330 "name": "Nvme1", 00:28:26.330 "trtype": "tcp", 00:28:26.330 "traddr": "10.0.0.2", 00:28:26.330 "adrfam": "ipv4", 00:28:26.330 "trsvcid": "4420", 00:28:26.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:26.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:26.330 "hdgst": false, 00:28:26.330 "ddgst": false 00:28:26.330 }, 00:28:26.330 "method": "bdev_nvme_attach_controller" 00:28:26.330 },{ 00:28:26.330 "params": { 00:28:26.330 "name": "Nvme2", 00:28:26.330 "trtype": "tcp", 00:28:26.330 "traddr": "10.0.0.2", 00:28:26.330 "adrfam": "ipv4", 00:28:26.330 "trsvcid": "4420", 00:28:26.330 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:26.330 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:26.330 "hdgst": false, 00:28:26.330 "ddgst": false 00:28:26.330 }, 00:28:26.330 "method": "bdev_nvme_attach_controller" 00:28:26.330 },{ 00:28:26.330 "params": { 00:28:26.330 "name": "Nvme3", 00:28:26.330 "trtype": "tcp", 00:28:26.330 "traddr": "10.0.0.2", 00:28:26.330 "adrfam": "ipv4", 00:28:26.330 "trsvcid": "4420", 00:28:26.330 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:26.330 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:26.330 "hdgst": false, 00:28:26.330 "ddgst": false 00:28:26.330 }, 00:28:26.330 "method": "bdev_nvme_attach_controller" 00:28:26.330 },{ 00:28:26.330 "params": { 00:28:26.330 "name": "Nvme4", 00:28:26.330 "trtype": "tcp", 00:28:26.330 "traddr": "10.0.0.2", 00:28:26.330 "adrfam": "ipv4", 00:28:26.330 "trsvcid": "4420", 00:28:26.330 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:26.330 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:26.330 "hdgst": false, 00:28:26.330 "ddgst": false 00:28:26.330 }, 
00:28:26.330 "method": "bdev_nvme_attach_controller" 00:28:26.330 },{ 00:28:26.330 "params": { 00:28:26.330 "name": "Nvme5", 00:28:26.330 "trtype": "tcp", 00:28:26.330 "traddr": "10.0.0.2", 00:28:26.330 "adrfam": "ipv4", 00:28:26.330 "trsvcid": "4420", 00:28:26.330 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:26.330 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:26.330 "hdgst": false, 00:28:26.330 "ddgst": false 00:28:26.330 }, 00:28:26.330 "method": "bdev_nvme_attach_controller" 00:28:26.330 },{ 00:28:26.330 "params": { 00:28:26.330 "name": "Nvme6", 00:28:26.330 "trtype": "tcp", 00:28:26.330 "traddr": "10.0.0.2", 00:28:26.330 "adrfam": "ipv4", 00:28:26.330 "trsvcid": "4420", 00:28:26.330 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:26.330 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:26.330 "hdgst": false, 00:28:26.330 "ddgst": false 00:28:26.330 }, 00:28:26.330 "method": "bdev_nvme_attach_controller" 00:28:26.330 },{ 00:28:26.330 "params": { 00:28:26.330 "name": "Nvme7", 00:28:26.330 "trtype": "tcp", 00:28:26.330 "traddr": "10.0.0.2", 00:28:26.330 "adrfam": "ipv4", 00:28:26.330 "trsvcid": "4420", 00:28:26.330 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:26.330 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:26.330 "hdgst": false, 00:28:26.330 "ddgst": false 00:28:26.330 }, 00:28:26.330 "method": "bdev_nvme_attach_controller" 00:28:26.330 },{ 00:28:26.330 "params": { 00:28:26.330 "name": "Nvme8", 00:28:26.330 "trtype": "tcp", 00:28:26.330 "traddr": "10.0.0.2", 00:28:26.330 "adrfam": "ipv4", 00:28:26.330 "trsvcid": "4420", 00:28:26.330 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:26.330 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:26.330 "hdgst": false, 00:28:26.330 "ddgst": false 00:28:26.330 }, 00:28:26.330 "method": "bdev_nvme_attach_controller" 00:28:26.330 },{ 00:28:26.330 "params": { 00:28:26.330 "name": "Nvme9", 00:28:26.330 "trtype": "tcp", 00:28:26.330 "traddr": "10.0.0.2", 00:28:26.330 "adrfam": "ipv4", 00:28:26.330 "trsvcid": "4420", 00:28:26.330 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:26.330 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:26.330 "hdgst": false, 00:28:26.330 "ddgst": false 00:28:26.330 }, 00:28:26.330 "method": "bdev_nvme_attach_controller" 00:28:26.330 },{ 00:28:26.330 "params": { 00:28:26.330 "name": "Nvme10", 00:28:26.330 "trtype": "tcp", 00:28:26.330 "traddr": "10.0.0.2", 00:28:26.330 "adrfam": "ipv4", 00:28:26.330 "trsvcid": "4420", 00:28:26.330 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:26.330 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:26.330 "hdgst": false, 00:28:26.330 "ddgst": false 00:28:26.330 }, 00:28:26.330 "method": "bdev_nvme_attach_controller" 00:28:26.330 }' 00:28:26.600 [2024-11-18 00:33:50.157834] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:26.600 [2024-11-18 00:33:50.157908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326221 ] 00:28:26.600 [2024-11-18 00:33:50.230865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.600 [2024-11-18 00:33:50.278609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.976 Running I/O for 10 seconds... 
00:28:28.551 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.551 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:28.551 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:28.551 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.551 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.551 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.551 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:28.551 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:28.551 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:28.551 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:28.551 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:28.552 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:28.552 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:28.552 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:28.552 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:28.552 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.552 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.552 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.552 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=89 00:28:28.552 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 89 -ge 100 ']' 00:28:28.552 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 326221 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 326221 ']' 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 326221 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 326221 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 326221' 00:28:28.815 killing process with pid 326221 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 326221 00:28:28.815 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 326221 00:28:29.073 Received 
shutdown signal, test time was about 0.953299 seconds 00:28:29.073 00:28:29.073 Latency(us) 00:28:29.073 [2024-11-17T23:33:52.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.073 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.073 Verification LBA range: start 0x0 length 0x400 00:28:29.073 Nvme1n1 : 0.94 271.56 16.97 0.00 0.00 232026.64 20583.16 257872.02 00:28:29.073 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.073 Verification LBA range: start 0x0 length 0x400 00:28:29.073 Nvme2n1 : 0.95 281.23 17.58 0.00 0.00 218614.76 8786.68 236123.78 00:28:29.073 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.073 Verification LBA range: start 0x0 length 0x400 00:28:29.073 Nvme3n1 : 0.95 268.77 16.80 0.00 0.00 226227.39 16311.18 257872.02 00:28:29.073 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.073 Verification LBA range: start 0x0 length 0x400 00:28:29.073 Nvme4n1 : 0.95 269.97 16.87 0.00 0.00 220297.29 23981.32 248551.35 00:28:29.073 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.073 Verification LBA range: start 0x0 length 0x400 00:28:29.073 Nvme5n1 : 0.92 207.84 12.99 0.00 0.00 280007.11 19515.16 262532.36 00:28:29.073 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.073 Verification LBA range: start 0x0 length 0x400 00:28:29.073 Nvme6n1 : 0.91 211.50 13.22 0.00 0.00 268671.81 28932.93 248551.35 00:28:29.073 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.073 Verification LBA range: start 0x0 length 0x400 00:28:29.073 Nvme7n1 : 0.93 207.27 12.95 0.00 0.00 268903.41 32622.36 245444.46 00:28:29.073 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.073 Verification LBA range: start 0x0 length 0x400 00:28:29.073 Nvme8n1 : 0.91 215.56 13.47 0.00 0.00 249796.27 
2730.67 250104.79 00:28:29.073 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.073 Verification LBA range: start 0x0 length 0x400 00:28:29.073 Nvme9n1 : 0.94 205.24 12.83 0.00 0.00 260140.25 22622.06 268746.15 00:28:29.073 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:29.073 Verification LBA range: start 0x0 length 0x400 00:28:29.073 Nvme10n1 : 0.94 204.29 12.77 0.00 0.00 255797.22 22233.69 282727.16 00:28:29.073 [2024-11-17T23:33:52.895Z] =================================================================================================================== 00:28:29.073 [2024-11-17T23:33:52.895Z] Total : 2343.23 146.45 0.00 0.00 245142.10 2730.67 282727.16 00:28:29.073 00:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 326091 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 
00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:30.447 rmmod nvme_tcp 00:28:30.447 rmmod nvme_fabrics 00:28:30.447 rmmod nvme_keyring 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 326091 ']' 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 326091 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 326091 ']' 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 326091 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 326091 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 326091' 00:28:30.447 killing process with pid 326091 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 326091 00:28:30.447 00:33:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 326091 00:28:30.707 00:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:30.707 00:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:30.707 00:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:30.707 00:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:30.707 00:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:30.707 00:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:30.707 00:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:30.707 00:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:30.707 00:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:30.707 00:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.707 00:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.707 00:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.765 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:32.765 00:28:32.765 real 0m7.451s 00:28:32.765 user 0m22.508s 00:28:32.765 sys 0m1.448s 00:28:32.765 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:32.765 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:32.765 ************************************ 00:28:32.765 END TEST nvmf_shutdown_tc2 00:28:32.765 ************************************ 00:28:32.765 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:32.765 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:32.765 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:32.765 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:32.765 ************************************ 00:28:32.765 START TEST nvmf_shutdown_tc3 00:28:32.765 ************************************ 00:28:32.765 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:32.765 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:32.765 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:32.765 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:32.765 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:32.765 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:32.765 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:32.765 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:32.765 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:32.766 
00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:32.766 00:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:32.766 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:32.766 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:32.766 00:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:32.766 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:32.766 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.025 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.025 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.025 00:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.025 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.025 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.025 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.025 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.025 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:33.025 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:33.025 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.025 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:33.025 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:33.025 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:33.025 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:33.025 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:33.025 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:33.025 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:33.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:28:33.026 00:28:33.026 --- 10.0.0.2 ping statistics --- 00:28:33.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.026 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:33.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:28:33.026 00:28:33.026 --- 10.0.0.1 ping statistics --- 00:28:33.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.026 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:33.026 
00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=327136 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 327136 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 327136 ']' 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.026 00:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:33.285 [2024-11-18 00:33:56.874340] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:28:33.285 [2024-11-18 00:33:56.874417] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.285 [2024-11-18 00:33:56.947616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:33.285 [2024-11-18 00:33:56.993415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.285 [2024-11-18 00:33:56.993465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.285 [2024-11-18 00:33:56.993488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.285 [2024-11-18 00:33:56.993499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.285 [2024-11-18 00:33:56.993508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:33.285 [2024-11-18 00:33:56.994934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:33.285 [2024-11-18 00:33:56.995000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:33.285 [2024-11-18 00:33:56.995065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:33.285 [2024-11-18 00:33:56.995068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:33.544 [2024-11-18 00:33:57.139733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.544 00:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.544 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:33.544 Malloc1 00:28:33.544 [2024-11-18 00:33:57.243800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.544 Malloc2 00:28:33.544 Malloc3 00:28:33.802 Malloc4 00:28:33.802 Malloc5 00:28:33.802 Malloc6 00:28:33.802 Malloc7 00:28:33.802 Malloc8 00:28:33.802 Malloc9 
00:28:34.060 Malloc10 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=327309 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 327309 /var/tmp/bdevperf.sock 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 327309 ']' 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:34.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.060 { 00:28:34.060 "params": { 00:28:34.060 "name": "Nvme$subsystem", 00:28:34.060 "trtype": "$TEST_TRANSPORT", 00:28:34.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.060 "adrfam": "ipv4", 00:28:34.060 "trsvcid": "$NVMF_PORT", 00:28:34.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.060 "hdgst": ${hdgst:-false}, 00:28:34.060 "ddgst": ${ddgst:-false} 00:28:34.060 }, 00:28:34.060 "method": "bdev_nvme_attach_controller" 00:28:34.060 } 00:28:34.060 EOF 00:28:34.060 )") 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.060 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.060 { 00:28:34.060 "params": { 00:28:34.060 "name": "Nvme$subsystem", 00:28:34.060 "trtype": "$TEST_TRANSPORT", 00:28:34.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.060 "adrfam": "ipv4", 00:28:34.060 "trsvcid": "$NVMF_PORT", 00:28:34.060 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.060 "hdgst": ${hdgst:-false}, 00:28:34.060 "ddgst": ${ddgst:-false} 00:28:34.060 }, 00:28:34.060 "method": "bdev_nvme_attach_controller" 00:28:34.060 } 00:28:34.061 EOF 00:28:34.061 )") 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.061 { 00:28:34.061 "params": { 00:28:34.061 "name": "Nvme$subsystem", 00:28:34.061 "trtype": "$TEST_TRANSPORT", 00:28:34.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.061 "adrfam": "ipv4", 00:28:34.061 "trsvcid": "$NVMF_PORT", 00:28:34.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.061 "hdgst": ${hdgst:-false}, 00:28:34.061 "ddgst": ${ddgst:-false} 00:28:34.061 }, 00:28:34.061 "method": "bdev_nvme_attach_controller" 00:28:34.061 } 00:28:34.061 EOF 00:28:34.061 )") 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.061 { 00:28:34.061 "params": { 00:28:34.061 "name": "Nvme$subsystem", 00:28:34.061 "trtype": "$TEST_TRANSPORT", 00:28:34.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.061 "adrfam": "ipv4", 00:28:34.061 "trsvcid": "$NVMF_PORT", 00:28:34.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.061 "hdgst": 
${hdgst:-false}, 00:28:34.061 "ddgst": ${ddgst:-false} 00:28:34.061 }, 00:28:34.061 "method": "bdev_nvme_attach_controller" 00:28:34.061 } 00:28:34.061 EOF 00:28:34.061 )") 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.061 { 00:28:34.061 "params": { 00:28:34.061 "name": "Nvme$subsystem", 00:28:34.061 "trtype": "$TEST_TRANSPORT", 00:28:34.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.061 "adrfam": "ipv4", 00:28:34.061 "trsvcid": "$NVMF_PORT", 00:28:34.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.061 "hdgst": ${hdgst:-false}, 00:28:34.061 "ddgst": ${ddgst:-false} 00:28:34.061 }, 00:28:34.061 "method": "bdev_nvme_attach_controller" 00:28:34.061 } 00:28:34.061 EOF 00:28:34.061 )") 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.061 { 00:28:34.061 "params": { 00:28:34.061 "name": "Nvme$subsystem", 00:28:34.061 "trtype": "$TEST_TRANSPORT", 00:28:34.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.061 "adrfam": "ipv4", 00:28:34.061 "trsvcid": "$NVMF_PORT", 00:28:34.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.061 "hdgst": ${hdgst:-false}, 00:28:34.061 "ddgst": ${ddgst:-false} 00:28:34.061 }, 00:28:34.061 "method": "bdev_nvme_attach_controller" 
00:28:34.061 } 00:28:34.061 EOF 00:28:34.061 )") 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.061 { 00:28:34.061 "params": { 00:28:34.061 "name": "Nvme$subsystem", 00:28:34.061 "trtype": "$TEST_TRANSPORT", 00:28:34.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.061 "adrfam": "ipv4", 00:28:34.061 "trsvcid": "$NVMF_PORT", 00:28:34.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.061 "hdgst": ${hdgst:-false}, 00:28:34.061 "ddgst": ${ddgst:-false} 00:28:34.061 }, 00:28:34.061 "method": "bdev_nvme_attach_controller" 00:28:34.061 } 00:28:34.061 EOF 00:28:34.061 )") 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.061 { 00:28:34.061 "params": { 00:28:34.061 "name": "Nvme$subsystem", 00:28:34.061 "trtype": "$TEST_TRANSPORT", 00:28:34.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.061 "adrfam": "ipv4", 00:28:34.061 "trsvcid": "$NVMF_PORT", 00:28:34.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.061 "hdgst": ${hdgst:-false}, 00:28:34.061 "ddgst": ${ddgst:-false} 00:28:34.061 }, 00:28:34.061 "method": "bdev_nvme_attach_controller" 00:28:34.061 } 00:28:34.061 EOF 00:28:34.061 )") 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@582 -- # cat 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.061 { 00:28:34.061 "params": { 00:28:34.061 "name": "Nvme$subsystem", 00:28:34.061 "trtype": "$TEST_TRANSPORT", 00:28:34.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.061 "adrfam": "ipv4", 00:28:34.061 "trsvcid": "$NVMF_PORT", 00:28:34.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.061 "hdgst": ${hdgst:-false}, 00:28:34.061 "ddgst": ${ddgst:-false} 00:28:34.061 }, 00:28:34.061 "method": "bdev_nvme_attach_controller" 00:28:34.061 } 00:28:34.061 EOF 00:28:34.061 )") 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.061 { 00:28:34.061 "params": { 00:28:34.061 "name": "Nvme$subsystem", 00:28:34.061 "trtype": "$TEST_TRANSPORT", 00:28:34.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.061 "adrfam": "ipv4", 00:28:34.061 "trsvcid": "$NVMF_PORT", 00:28:34.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.061 "hdgst": ${hdgst:-false}, 00:28:34.061 "ddgst": ${ddgst:-false} 00:28:34.061 }, 00:28:34.061 "method": "bdev_nvme_attach_controller" 00:28:34.061 } 00:28:34.061 EOF 00:28:34.061 )") 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@584 -- # jq . 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:34.061 00:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:34.061 "params": { 00:28:34.061 "name": "Nvme1", 00:28:34.061 "trtype": "tcp", 00:28:34.061 "traddr": "10.0.0.2", 00:28:34.061 "adrfam": "ipv4", 00:28:34.061 "trsvcid": "4420", 00:28:34.061 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:34.061 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:34.061 "hdgst": false, 00:28:34.061 "ddgst": false 00:28:34.061 }, 00:28:34.061 "method": "bdev_nvme_attach_controller" 00:28:34.061 },{ 00:28:34.061 "params": { 00:28:34.061 "name": "Nvme2", 00:28:34.061 "trtype": "tcp", 00:28:34.061 "traddr": "10.0.0.2", 00:28:34.061 "adrfam": "ipv4", 00:28:34.061 "trsvcid": "4420", 00:28:34.061 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:34.061 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:34.061 "hdgst": false, 00:28:34.061 "ddgst": false 00:28:34.061 }, 00:28:34.061 "method": "bdev_nvme_attach_controller" 00:28:34.061 },{ 00:28:34.061 "params": { 00:28:34.061 "name": "Nvme3", 00:28:34.061 "trtype": "tcp", 00:28:34.061 "traddr": "10.0.0.2", 00:28:34.061 "adrfam": "ipv4", 00:28:34.061 "trsvcid": "4420", 00:28:34.061 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:34.061 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:34.061 "hdgst": false, 00:28:34.061 "ddgst": false 00:28:34.061 }, 00:28:34.061 "method": "bdev_nvme_attach_controller" 00:28:34.061 },{ 00:28:34.061 "params": { 00:28:34.061 "name": "Nvme4", 00:28:34.061 "trtype": "tcp", 00:28:34.061 "traddr": "10.0.0.2", 00:28:34.061 "adrfam": "ipv4", 00:28:34.061 "trsvcid": "4420", 00:28:34.061 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:34.061 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:34.061 "hdgst": false, 00:28:34.061 "ddgst": false 00:28:34.061 }, 00:28:34.061 "method": "bdev_nvme_attach_controller" 00:28:34.061 },{ 
00:28:34.061 "params": { 00:28:34.061 "name": "Nvme5", 00:28:34.061 "trtype": "tcp", 00:28:34.061 "traddr": "10.0.0.2", 00:28:34.061 "adrfam": "ipv4", 00:28:34.061 "trsvcid": "4420", 00:28:34.061 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:34.061 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:34.061 "hdgst": false, 00:28:34.061 "ddgst": false 00:28:34.061 }, 00:28:34.061 "method": "bdev_nvme_attach_controller" 00:28:34.061 },{ 00:28:34.061 "params": { 00:28:34.061 "name": "Nvme6", 00:28:34.061 "trtype": "tcp", 00:28:34.061 "traddr": "10.0.0.2", 00:28:34.061 "adrfam": "ipv4", 00:28:34.061 "trsvcid": "4420", 00:28:34.061 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:34.061 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:34.061 "hdgst": false, 00:28:34.061 "ddgst": false 00:28:34.061 }, 00:28:34.061 "method": "bdev_nvme_attach_controller" 00:28:34.061 },{ 00:28:34.061 "params": { 00:28:34.061 "name": "Nvme7", 00:28:34.061 "trtype": "tcp", 00:28:34.061 "traddr": "10.0.0.2", 00:28:34.061 "adrfam": "ipv4", 00:28:34.061 "trsvcid": "4420", 00:28:34.061 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:34.061 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:34.061 "hdgst": false, 00:28:34.061 "ddgst": false 00:28:34.061 }, 00:28:34.061 "method": "bdev_nvme_attach_controller" 00:28:34.061 },{ 00:28:34.061 "params": { 00:28:34.061 "name": "Nvme8", 00:28:34.061 "trtype": "tcp", 00:28:34.061 "traddr": "10.0.0.2", 00:28:34.061 "adrfam": "ipv4", 00:28:34.061 "trsvcid": "4420", 00:28:34.061 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:34.061 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:34.061 "hdgst": false, 00:28:34.061 "ddgst": false 00:28:34.061 }, 00:28:34.061 "method": "bdev_nvme_attach_controller" 00:28:34.061 },{ 00:28:34.061 "params": { 00:28:34.061 "name": "Nvme9", 00:28:34.061 "trtype": "tcp", 00:28:34.061 "traddr": "10.0.0.2", 00:28:34.061 "adrfam": "ipv4", 00:28:34.061 "trsvcid": "4420", 00:28:34.061 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:34.061 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:28:34.061 "hdgst": false, 00:28:34.061 "ddgst": false 00:28:34.061 }, 00:28:34.061 "method": "bdev_nvme_attach_controller" 00:28:34.061 },{ 00:28:34.061 "params": { 00:28:34.061 "name": "Nvme10", 00:28:34.061 "trtype": "tcp", 00:28:34.061 "traddr": "10.0.0.2", 00:28:34.061 "adrfam": "ipv4", 00:28:34.061 "trsvcid": "4420", 00:28:34.061 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:34.061 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:34.061 "hdgst": false, 00:28:34.061 "ddgst": false 00:28:34.061 }, 00:28:34.061 "method": "bdev_nvme_attach_controller" 00:28:34.061 }' 00:28:34.061 [2024-11-18 00:33:57.766151] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:34.061 [2024-11-18 00:33:57.766240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid327309 ] 00:28:34.061 [2024-11-18 00:33:57.840069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.319 [2024-11-18 00:33:57.888428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.221 Running I/O for 10 seconds... 
00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:36.221 00:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:36.221 00:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:36.480 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:36.480 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:36.480 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:36.480 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:36.480 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.480 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:36.480 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:28:36.480 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:36.480 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:36.480 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:36.738 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:36.738 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:36.738 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:36.738 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:36.738 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.738 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:36.738 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.011 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=139 00:28:37.011 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 139 -ge 100 ']' 00:28:37.011 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:37.011 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:37.011 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:37.011 00:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 327136 00:28:37.011 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 327136 ']' 00:28:37.011 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 327136 00:28:37.011 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:28:37.011 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:37.011 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 327136 00:28:37.011 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:37.011 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:37.011 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 327136' 00:28:37.011 killing process with pid 327136 00:28:37.011 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 327136 00:28:37.011 00:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 327136 00:28:37.011 [2024-11-18 00:34:00.607055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e810 is same with the state(6) to be set 00:28:37.011 [2024-11-18 00:34:00.607134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e810 is same with the state(6) to be set 00:28:37.011 [2024-11-18 00:34:00.607184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x229e810 is same with the state(6) to be set 00:28:37.011 [2024-11-18 00:34:00.607199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229e810 is same with the state(6) to be set 00:28:37.011 [2024-11-18 00:34:00.608094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.011 [2024-11-18 00:34:00.608140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.011 [2024-11-18 00:34:00.608170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.011 [2024-11-18 00:34:00.608185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.011 [2024-11-18 00:34:00.608200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.011 [2024-11-18 00:34:00.608214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.011 [2024-11-18 00:34:00.608231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.011 [2024-11-18 00:34:00.608245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.011 [2024-11-18 00:34:00.608259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ac450 is same with the state(6) to be set 00:28:37.011 [2024-11-18 00:34:00.608687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.011 [2024-11-18 00:34:00.608727] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.011 [2024-11-18 00:34:00.608755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.011 [2024-11-18 00:34:00.608773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012 [2024-11-18 00:34:00.608790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012 [2024-11-18 00:34:00.608805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012 [2024-11-18 00:34:00.608821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012 [2024-11-18 00:34:00.608836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012 [2024-11-18 00:34:00.608853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012 [2024-11-18 00:34:00.608868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012 [2024-11-18 00:34:00.608884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012 [2024-11-18 00:34:00.608899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012 [2024-11-18 00:34:00.608914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.608932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012
[2024-11-18 00:34:00.608946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.608949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012
[2024-11-18 00:34:00.608963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.608967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012
[2024-11-18 00:34:00.608976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.608982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012
[2024-11-18 00:34:00.608989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.608999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012
[2024-11-18 00:34:00.609001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012
[2024-11-18 00:34:00.609041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012
[2024-11-18 00:34:00.609054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012
[2024-11-18 00:34:00.609067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012
[2024-11-18 00:34:00.609080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012
[2024-11-18 00:34:00.609107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012
[2024-11-18 00:34:00.609120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012
[2024-11-18 00:34:00.609132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012
[2024-11-18 00:34:00.609151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012
[2024-11-18 00:34:00.609180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012
[2024-11-18 00:34:00.609193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012
[2024-11-18 00:34:00.609205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012
[2024-11-18 00:34:00.609232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012
[2024-11-18 00:34:00.609244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012
[2024-11-18 00:34:00.609256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012
[2024-11-18 00:34:00.609270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012
[2024-11-18 00:34:00.609283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012
[2024-11-18 00:34:00.609334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012
[2024-11-18 00:34:00.609347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012
[2024-11-18 00:34:00.609360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012
[2024-11-18 00:34:00.609386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012
[2024-11-18 00:34:00.609399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012
[2024-11-18 00:34:00.609412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012
[2024-11-18 00:34:00.609425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012
[2024-11-18 00:34:00.609452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012
[2024-11-18 00:34:00.609465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.012
[2024-11-18 00:34:00.609477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.012
[2024-11-18 00:34:00.609490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.012
[2024-11-18 00:34:00.609503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013
[2024-11-18 00:34:00.609515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013
[2024-11-18 00:34:00.609527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013
[2024-11-18 00:34:00.609540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013
[2024-11-18 00:34:00.609557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013
[2024-11-18 00:34:00.609570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013
[2024-11-18 00:34:00.609599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013
[2024-11-18 00:34:00.609619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013
[2024-11-18 00:34:00.609631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013
[2024-11-18 00:34:00.609644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013
[2024-11-18 00:34:00.609657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013
[2024-11-18 00:34:00.609684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013
[2024-11-18 00:34:00.609696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013
[2024-11-18 00:34:00.609709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013
[2024-11-18 00:34:00.609722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013
[2024-11-18 00:34:00.609751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013
[2024-11-18 00:34:00.609763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013
[2024-11-18 00:34:00.609776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013
[2024-11-18 00:34:00.609788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a13a0 is same with the state(6) to be set 00:28:37.013
[2024-11-18 00:34:00.609804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013
[2024-11-18 00:34:00.609819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013
[2024-11-18 00:34:00.609834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013
[2024-11-18 00:34:00.609848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013
[2024-11-18 00:34:00.609864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:28:37.013 [2024-11-18 00:34:00.609878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013 [2024-11-18 00:34:00.609894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013 [2024-11-18 00:34:00.609908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013 [2024-11-18 00:34:00.609924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013 [2024-11-18 00:34:00.609939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013 [2024-11-18 00:34:00.609954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013 [2024-11-18 00:34:00.609968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013 [2024-11-18 00:34:00.609984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013 [2024-11-18 00:34:00.609998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013 [2024-11-18 00:34:00.610014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013 [2024-11-18 00:34:00.610028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013 [2024-11-18 
00:34:00.610044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013 [2024-11-18 00:34:00.610061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013 [2024-11-18 00:34:00.610078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013 [2024-11-18 00:34:00.610107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013 [2024-11-18 00:34:00.610123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013 [2024-11-18 00:34:00.610138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013 [2024-11-18 00:34:00.610153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013 [2024-11-18 00:34:00.610167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013 [2024-11-18 00:34:00.610189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013 [2024-11-18 00:34:00.610204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013 [2024-11-18 00:34:00.610219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013 [2024-11-18 00:34:00.610233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013 [2024-11-18 00:34:00.610248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013 [2024-11-18 00:34:00.610262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013 [2024-11-18 00:34:00.610278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013 [2024-11-18 00:34:00.610306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013 [2024-11-18 00:34:00.610332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013 [2024-11-18 00:34:00.610347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013 [2024-11-18 00:34:00.610362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013 [2024-11-18 00:34:00.610377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013 [2024-11-18 00:34:00.610392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.013 [2024-11-18 00:34:00.610406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.013 [2024-11-18 00:34:00.610421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.014 [2024-11-18 00:34:00.610436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.014 [2024-11-18 00:34:00.610451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.014 [2024-11-18 00:34:00.610465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.014 [2024-11-18 00:34:00.610485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.014 [2024-11-18 00:34:00.610500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.014 [2024-11-18 00:34:00.610516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.014 [2024-11-18 00:34:00.610530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.014 [2024-11-18 00:34:00.610546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.014 [2024-11-18 00:34:00.610561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.014 [2024-11-18 00:34:00.610576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.014 [2024-11-18 00:34:00.610590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:37.014 [2024-11-18 00:34:00.610606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.014 [2024-11-18 00:34:00.610630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.014 [2024-11-18 00:34:00.610646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.014 [2024-11-18 00:34:00.610660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.014 [2024-11-18 00:34:00.610675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.014 [2024-11-18 00:34:00.610689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.014 [2024-11-18 00:34:00.610710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.014 [2024-11-18 00:34:00.610725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.014 [2024-11-18 00:34:00.610740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.014 [2024-11-18 00:34:00.610754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.014 [2024-11-18 00:34:00.610769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.014 [2024-11-18 
00:34:00.610784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.014 [2024-11-18 00:34:00.610799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.014 [2024-11-18 00:34:00.610813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.014 [2024-11-18 00:34:00.610828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.014 [2024-11-18 00:34:00.610842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.014 [2024-11-18 00:34:00.610881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.014 [2024-11-18 00:34:00.611223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with 
the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 
00:28:37.014 [2024-11-18 00:34:00.611642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 
00:34:00.611794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.611902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ece0 is same with the state(6) to be set 00:28:37.014 [2024-11-18 00:34:00.614177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229f1b0 is same with the state(6) to be set 00:28:37.015 [2024-11-18 00:34:00.614212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229f1b0 is same with the state(6) to be set 00:28:37.015 [2024-11-18 00:34:00.614427] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.614457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.614479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.614496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.614514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.614528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.614545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.614560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.614576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.614591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.614616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.614631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.614647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.614661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.614677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.614690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.614706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.614720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.614736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.614750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.614765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.614779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.614804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.614818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.614834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.614853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.614870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.614885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.614901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.614915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.614930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.614953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.614968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.614988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 
[2024-11-18 00:34:00.615004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229f6a0 is same with the state(6) to be set 00:28:37.015 [2024-11-18 00:34:00.615062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229f6a0 is same with the state(6) to be set 00:28:37.015 [2024-11-18 00:34:00.615096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229f6a0 is same with the state(6) to be set 00:28:37.015 [2024-11-18 00:34:00.615112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x229f6a0 is same with the state(6) to be set 00:28:37.015 [2024-11-18 00:34:00.615128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615478] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.015 [2024-11-18 00:34:00.615588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.015 [2024-11-18 00:34:00.615633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.615647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.615662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.615676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.615691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.615705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.615720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.615733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.615748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.615761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.615776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.615791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.615807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.615822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.615823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be 
set 00:28:37.016 [2024-11-18 00:34:00.615837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.615851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.615852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.615867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.615870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.615879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.615884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.615891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.615900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.615904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.615914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.615922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.615929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.615935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.615943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.615947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.615958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.615959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.615972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.615974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.615984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.615990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.615996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.616005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.616007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.616020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.616022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.616031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.616036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.616044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.616052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.616057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.616066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.616070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.616081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 
00:34:00.616081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.616093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.616101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.616105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.616118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.616117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.616131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.616133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.616144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.616149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.616156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.616164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.616168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.616180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.616180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.016 [2024-11-18 00:34:00.616195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.616196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.016 [2024-11-18 00:34:00.616207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.016 [2024-11-18 00:34:00.616212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.616219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.616231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.616249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.616262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.616274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.616288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.616352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.616364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.616377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.616404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.616416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.616429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.616441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.616453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.616478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.616491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.616502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.616530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the 
state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229fb70 is same with the state(6) to be set 00:28:37.017 [2024-11-18 00:34:00.616866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.616893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.616915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.616932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.616948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.616963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.616979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.616994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.617010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.617024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.617040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.617059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.617076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 
00:34:00.617091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.617106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.617121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.617136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.617154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.617170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.617184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.617200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.617216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.617232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.617246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.617262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.617276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.617291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.617324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.617342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.017 [2024-11-18 00:34:00.617358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.017 [2024-11-18 00:34:00.617373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.617404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.617435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.617470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.617501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.617532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.617561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.617592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.617631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.617662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.617692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.617728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.617758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.617788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 
[2024-11-18 00:34:00.617818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.617848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.617897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.617927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.617957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.617971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.617980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.617986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.618004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.618005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.618031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.618044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.618056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.618068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 
00:34:00.618080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.618092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.618104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.618122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.618136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.618148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.618160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.618187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.618199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.618211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.618223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.618249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.018 [2024-11-18 00:34:00.618261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.018 [2024-11-18 00:34:00.618273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.018 [2024-11-18 00:34:00.618281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618365] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 
00:28:37.019 [2024-11-18 00:34:00.618452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 
[2024-11-18 00:34:00.618828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0040 is same with the state(6) to be set 00:28:37.019 [2024-11-18 00:34:00.618878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.618973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.019 [2024-11-18 00:34:00.618987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.019 [2024-11-18 00:34:00.619028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.020 [2024-11-18 00:34:00.619534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:37.020 [2024-11-18 00:34:00.619586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ac450 (9): Bad file descriptor 00:28:37.020 [2024-11-18 00:34:00.619671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.619694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.619709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.619723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.619737] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.619759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.619773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.619786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.619799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1807ea0 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.619860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.619892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.619907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.619920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.619942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.619955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.619969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.619982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.619995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cb590 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.620052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.620067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.620081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.620099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.620113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.620127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.620140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.620152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f4f50 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620198] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.620219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.620234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.620248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.620264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.620278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.620292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.620318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.620333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e6a90 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 
00:28:37.020 [2024-11-18 00:34:00.620380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.620385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.620413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.620425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.620438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.620456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c
dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.620471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.620484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.620497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a6860 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.620547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 
is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.020 [2024-11-18 00:34:00.620575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.020 [2024-11-18 00:34:00.620587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.020 [2024-11-18 00:34:00.620591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.021 [2024-11-18 00:34:00.620600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.021 [2024-11-18 00:34:00.620622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.021 [2024-11-18 00:34:00.620622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.021 [2024-11-18 00:34:00.620651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.021 [2024-11-18 00:34:00.620668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b40a0 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.021 [2024-11-18 00:34:00.620722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.021 [2024-11-18 00:34:00.620750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.021 [2024-11-18 00:34:00.620763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.021 [2024-11-18 00:34:00.620775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.021 [2024-11-18 00:34:00.620789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.021 [2024-11-18 00:34:00.620802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.021 [2024-11-18 00:34:00.620816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.021 [2024-11-18 00:34:00.620830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620838] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e7e10 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620985] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.620998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.621010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.621022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.621035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.621047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.621062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.621078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.621090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.621102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.621114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.621127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.621140] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.621154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.621166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.621177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0510 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622432] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622570] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.021 [2024-11-18 00:34:00.622616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622724] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622879] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.622995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.623006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.623017] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.623032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.623043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.623055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.623066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.623077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.623088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0a00 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.623603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:37.022 [2024-11-18 00:34:00.623649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:37.022 [2024-11-18 00:34:00.623677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cb590 (9): Bad file descriptor 00:28:37.022 [2024-11-18 00:34:00.623709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a6860 (9): Bad file descriptor 00:28:37.022 [2024-11-18 00:34:00.623949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.623975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.623989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with 
the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 
00:28:37.022 [2024-11-18 00:34:00.624338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 
00:34:00.624486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.022 [2024-11-18 00:34:00.624521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.023 [2024-11-18 00:34:00.624634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ac450 with addr=10.0.0.2, port=4420 00:28:37.023 [2024-11-18 00:34:00.624662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ac450 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is 
same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.624796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a0ed0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.625685] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:37.023 [2024-11-18 00:34:00.625880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.023 [2024-11-18 00:34:00.625908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a6860 with addr=10.0.0.2, port=4420 00:28:37.023 [2024-11-18 00:34:00.625926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a6860 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.626043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.023 [2024-11-18 00:34:00.626078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17cb590 with addr=10.0.0.2, port=4420 00:28:37.023 [2024-11-18 00:34:00.626099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cb590 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.626119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ac450 (9): Bad file descriptor 00:28:37.023 [2024-11-18 00:34:00.626201] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:37.023 [2024-11-18 00:34:00.626288] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:37.023 [2024-11-18 00:34:00.626638] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a6860 (9): Bad file descriptor 00:28:37.023 [2024-11-18 00:34:00.626665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cb590 (9): Bad file descriptor 00:28:37.023 [2024-11-18 00:34:00.626684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:37.023 [2024-11-18 00:34:00.626698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:37.023 [2024-11-18 00:34:00.626724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:37.023 [2024-11-18 00:34:00.626739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:37.023 [2024-11-18 00:34:00.626845] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:37.023 [2024-11-18 00:34:00.626927] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:37.023 [2024-11-18 00:34:00.627081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:37.023 [2024-11-18 00:34:00.627102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:37.023 [2024-11-18 00:34:00.627115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:37.023 [2024-11-18 00:34:00.627129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:28:37.023 [2024-11-18 00:34:00.627144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:37.023 [2024-11-18 00:34:00.627157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:37.023 [2024-11-18 00:34:00.627169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:37.023 [2024-11-18 00:34:00.627182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:28:37.023 [2024-11-18 00:34:00.627272] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:37.023 [2024-11-18 00:34:00.627447] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:37.023 [2024-11-18 00:34:00.629606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.023 [2024-11-18 00:34:00.629642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.023 [2024-11-18 00:34:00.629659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.023 [2024-11-18 00:34:00.629674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.023 [2024-11-18 00:34:00.629688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.023 [2024-11-18 00:34:00.629701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.023 [2024-11-18 00:34:00.629715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.023 [2024-11-18 00:34:00.629729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.023 [2024-11-18 00:34:00.629751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18080d0 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.629784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1807ea0 (9): Bad file descriptor 00:28:37.023 [2024-11-18 00:34:00.629837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.023 [2024-11-18 00:34:00.629858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.023 [2024-11-18 00:34:00.629874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.023 [2024-11-18 00:34:00.629887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.023 [2024-11-18 00:34:00.629901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.023 [2024-11-18 00:34:00.629915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.023 [2024-11-18 00:34:00.629929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.023 [2024-11-18 00:34:00.629942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.023 [2024-11-18 
00:34:00.629955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd330 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.629985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f4f50 (9): Bad file descriptor 00:28:37.023 [2024-11-18 00:34:00.630018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e6a90 (9): Bad file descriptor 00:28:37.023 [2024-11-18 00:34:00.630048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b40a0 (9): Bad file descriptor 00:28:37.023 [2024-11-18 00:34:00.630078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e7e10 (9): Bad file descriptor 00:28:37.023 [2024-11-18 00:34:00.633855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:37.023 [2024-11-18 00:34:00.634068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.023 [2024-11-18 00:34:00.634098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ac450 with addr=10.0.0.2, port=4420 00:28:37.023 [2024-11-18 00:34:00.634116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ac450 is same with the state(6) to be set 00:28:37.023 [2024-11-18 00:34:00.634177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ac450 (9): Bad file descriptor 00:28:37.023 [2024-11-18 00:34:00.634236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:37.023 [2024-11-18 00:34:00.634254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:37.023 [2024-11-18 00:34:00.634271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed 
state.
00:28:37.023 [2024-11-18 00:34:00.634286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:28:37.023 [2024-11-18 00:34:00.634848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:28:37.023 [2024-11-18 00:34:00.634874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:28:37.023 [2024-11-18 00:34:00.635011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.024 [2024-11-18 00:34:00.635050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17cb590 with addr=10.0.0.2, port=4420
00:28:37.024 [2024-11-18 00:34:00.635068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cb590 is same with the state(6) to be set
00:28:37.024 [2024-11-18 00:34:00.635154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.024 [2024-11-18 00:34:00.635179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a6860 with addr=10.0.0.2, port=4420
00:28:37.024 [2024-11-18 00:34:00.635196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a6860 is same with the state(6) to be set
00:28:37.024 [2024-11-18 00:34:00.635254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cb590 (9): Bad file descriptor
00:28:37.024 [2024-11-18 00:34:00.635277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a6860 (9): Bad file descriptor
00:28:37.024 [2024-11-18 00:34:00.635339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:28:37.024 [2024-11-18 00:34:00.635358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:28:37.024 [2024-11-18 00:34:00.635373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:28:37.024 [2024-11-18 00:34:00.635387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:28:37.024 [2024-11-18 00:34:00.635401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:28:37.024 [2024-11-18 00:34:00.635414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:28:37.024 [2024-11-18 00:34:00.635427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:28:37.024 [2024-11-18 00:34:00.635439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:28:37.024 [2024-11-18 00:34:00.639646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18080d0 (9): Bad file descriptor
00:28:37.024 [2024-11-18 00:34:00.639730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dd330 (9): Bad file descriptor
00:28:37.024 [2024-11-18 00:34:00.639935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.639966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.639998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.024 [2024-11-18 00:34:00.640929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.024 [2024-11-18 00:34:00.640945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.640963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.640980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.640995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.641680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.641695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af610 is same with the state(6) to be set
00:28:37.025 [2024-11-18 00:34:00.642940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.642965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.642987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.643003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.643026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.643041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.643058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.643073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.643089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.643104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.643119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.643134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.643151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.643166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.643182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.643196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.643213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.025 [2024-11-18 00:34:00.643228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.025 [2024-11-18 00:34:00.643244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.643976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.643992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.644006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.644023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.644037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.644053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.644069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.644085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.644098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.644115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.644129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.644145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.644159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.644176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.644190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.644205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.644224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.644241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.644255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.644271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.644286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.644301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.644324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.644342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.644357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.644373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.644388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.644404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.644419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.644435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.644450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.644466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.644481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.026 [2024-11-18 00:34:00.644496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.026 [2024-11-18 00:34:00.644511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.027 [2024-11-18 00:34:00.644527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.027 [2024-11-18 00:34:00.644541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.027 [2024-11-18 00:34:00.644557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.027 [2024-11-18 00:34:00.644572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.027 [2024-11-18 00:34:00.644588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.027 [2024-11-18 00:34:00.644602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.027 [2024-11-18 00:34:00.644622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.027 [2024-11-18 00:34:00.644638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.027 [2024-11-18 00:34:00.644654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.027 [2024-11-18 00:34:00.644669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.027 [2024-11-18 00:34:00.644684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.027 [2024-11-18 00:34:00.644699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.027 [2024-11-18 00:34:00.644714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.027 [2024-11-18 00:34:00.644729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.027 [2024-11-18 00:34:00.644745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.644760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.644775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.644790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.644806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.644820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.644847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.644862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.644877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.644892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.644908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.644922] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.644938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.644952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.644968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.644983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.644997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b4860 is same with the state(6) to be set 00:28:37.027 [2024-11-18 00:34:00.646241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:37.027 [2024-11-18 00:34:00.646532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646706] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.646982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.027 [2024-11-18 00:34:00.646996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.027 [2024-11-18 00:34:00.647012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647229] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647407] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 
00:34:00.647778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.647980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.647994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.648010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.648024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.648040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.648055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.648070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.648084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.648100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.648125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.648141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.648155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.648171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.648187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.648203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.648217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.648233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.028 [2024-11-18 00:34:00.648251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.028 [2024-11-18 00:34:00.648268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.029 [2024-11-18 00:34:00.648282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.029 [2024-11-18 00:34:00.648297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b5da0 is same with the state(6) to be set 00:28:37.029 [2024-11-18 00:34:00.649558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:37.029 [2024-11-18 00:34:00.649582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:37.029 [2024-11-18 00:34:00.649604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:37.029 [2024-11-18 00:34:00.649621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... identical READ / ABORTED - SQ DELETION record pairs repeat for cid 2-62, lba 16640-24320 (len:128 each) ...] 
00:28:37.030 [2024-11-18 00:34:00.651643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:37.030 [2024-11-18 00:34:00.651662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:37.030 [2024-11-18 00:34:00.651676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b7370 is same with the state(6) to be set 
00:28:37.030 [2024-11-18 00:34:00.652999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:37.030 [2024-11-18 00:34:00.653023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... identical READ / ABORTED - SQ DELETION record pairs repeat for cid 1-53, lba 16512-23168 (len:128 each) ...] 
00:28:37.032 [2024-11-18 00:34:00.654762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:37.032 [2024-11-18 00:34:00.654777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.654794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.654809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.654825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.654839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.654855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.654869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.654884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.654899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.654918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.654934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.654950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.032 [2024-11-18 00:34:00.654965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.032 [2024-11-18 00:34:00.654981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.032 [2024-11-18 00:34:00.654995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.032 [2024-11-18 00:34:00.655011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.032 [2024-11-18 00:34:00.655026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.032 [2024-11-18 00:34:00.655041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.032 [2024-11-18 00:34:00.655056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.032 [2024-11-18 00:34:00.655071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fec60 is same with the state(6) to be set
00:28:37.032 [2024-11-18 00:34:00.656282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:28:37.032 [2024-11-18 00:34:00.656323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:28:37.032 [2024-11-18 00:34:00.656345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:28:37.032 [2024-11-18 00:34:00.656363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:28:37.032 [2024-11-18 00:34:00.656487] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:28:37.032 [2024-11-18 00:34:00.656604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:28:37.032 [2024-11-18 00:34:00.656830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.032 [2024-11-18 00:34:00.656868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b40a0 with addr=10.0.0.2, port=4420
00:28:37.032 [2024-11-18 00:34:00.656887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b40a0 is same with the state(6) to be set
00:28:37.032 [2024-11-18 00:34:00.656980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.032 [2024-11-18 00:34:00.657006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e6a90 with addr=10.0.0.2, port=4420
00:28:37.032 [2024-11-18 00:34:00.657023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e6a90 is same with the state(6) to be set
00:28:37.032 [2024-11-18 00:34:00.657124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.032 [2024-11-18 00:34:00.657151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f4f50 with addr=10.0.0.2, port=4420
00:28:37.032 [2024-11-18 00:34:00.657167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f4f50 is same with the state(6) to be set
00:28:37.032 [2024-11-18 00:34:00.657256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.032 [2024-11-18 00:34:00.657281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e7e10 with
addr=10.0.0.2, port=4420 00:28:37.032 [2024-11-18 00:34:00.657302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e7e10 is same with the state(6) to be set 00:28:37.032 [2024-11-18 00:34:00.658411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.658438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.658462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.658479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.658497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.658512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.658528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.658543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.658560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.658574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.658590] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.658604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.658626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.658641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.658657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.658672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.658687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.658702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.658717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.658732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.658748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.658762] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.658778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.658793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.658815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.658831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.658848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.658863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.032 [2024-11-18 00:34:00.658880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.032 [2024-11-18 00:34:00.658895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.658916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.658931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.658947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.658961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.658977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.658992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 
00:34:00.659141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:37.033 [2024-11-18 00:34:00.659724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659893] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.659983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.659998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.660013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.660029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.660047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.660064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.660079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.660095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.660110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.660137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.660152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.660168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.033 [2024-11-18 00:34:00.660182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.033 [2024-11-18 00:34:00.660208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.034 [2024-11-18 00:34:00.660223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.034 [2024-11-18 00:34:00.660238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.034 [2024-11-18 00:34:00.660253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0
00:28:37.034 [2024-11-18 00:34:00.660269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.034 [2024-11-18 00:34:00.660283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same *NOTICE* command/completion pairing repeats for WRITE cid:0-3 (lba 24576-24960) and READ cid:61-63 (lba 24192-24448), all sqid:1 len:128, every completion ABORTED - SQ DELETION (00/08) ...]
00:28:37.034 [2024-11-18 00:34:00.660539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f9f20 is same with the state(6) to be set
00:28:37.034 [2024-11-18 00:34:00.661805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.034 [2024-11-18 00:34:00.661829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pairing repeats for READ cid:1 through cid:63 (lba 16512 through 24448, all sqid:1 len:128), every completion ABORTED - SQ DELETION (00/08) ...]
00:28:37.035 [2024-11-18 00:34:00.663896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd780 is same with the state(6) to be set
00:28:37.036 [2024-11-18 00:34:00.666272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:37.036 [2024-11-18 00:34:00.666334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:28:37.036 [2024-11-18 00:34:00.666359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:28:37.036 [2024-11-18 00:34:00.666379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:28:37.036 task offset: 24576 on job bdev=Nvme1n1 fails
00:28:37.036
00:28:37.036 Latency(us)
00:28:37.036 [2024-11-17T23:34:00.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[... each job below ran with Core Mask 0x1, workload: verify, depth: 64, IO size: 65536, Verification LBA range: start 0x0 length 0x400, and ended in about its listed runtime with error ...]
00:28:37.036 Nvme1n1  : 0.90  214.20 13.39 71.40 0.00 221525.24  7670.14 262532.36
00:28:37.036 Nvme2n1  : 0.92  154.60  9.66 57.30 0.00 291207.46 18058.81 279620.27
00:28:37.036 Nvme3n1  : 0.90  212.34 13.27 70.78 0.00 214289.92  9514.86 257872.02
00:28:37.036 Nvme4n1  : 0.91  212.07 13.25 70.69 0.00 210021.97  5534.15 250104.79
00:28:37.036 Nvme5n1  : 0.93  137.89  8.62 68.94 0.00 281671.30 21845.33 260978.92
00:28:37.036 Nvme6n1  : 0.93  137.40  8.59 68.70 0.00 276611.22 21651.15 259425.47
00:28:37.036 Nvme7n1  : 0.93  136.91  8.56 68.45 0.00 271828.51 18447.17 264085.81
00:28:37.036 Nvme8n1  : 0.94  139.86  8.74 67.81 0.00 263476.02 16602.45 242337.56
00:28:37.036 Nvme9n1  : 0.95  135.15  8.45 67.57 0.00 264312.60 20971.52 268746.15
00:28:37.036 Nvme10n1 : 0.94  136.41  8.53 68.21 0.00 255623.77 20874.43 292047.83
00:28:37.036 [2024-11-17T23:34:00.858Z] ===================================================================================================================
00:28:37.036 [2024-11-17T23:34:00.858Z] Total    : 1616.84 101.05 679.86 0.00 251538.38  5534.15 292047.83
00:28:37.036 [2024-11-18 00:34:00.696437] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:37.036 [2024-11-18 00:34:00.696523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:28:37.036 [2024-11-18 00:34:00.696794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.036 [2024-11-18 00:34:00.696841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1807ea0 with addr=10.0.0.2, port=4420
00:28:37.036 [2024-11-18 00:34:00.696863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1807ea0 is same with the state(6) to be set
00:28:37.036 [2024-11-18 00:34:00.696893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b40a0 (9): Bad file descriptor
[... the same "Failed to flush tqpair=... (9): Bad file descriptor" error repeats for tqpair 0x17e6a90, 0x12f4f50 and 0x17e7e10 ...]
[... the same connect() failed / sock connection error / recv state error triple repeats for tqpair 0x13ac450, 0x13a6860, 0x17cb590, 0x17dd330 and 0x18080d0 (all with addr=10.0.0.2, port=4420) ...]
00:28:37.036 [2024-11-18 00:34:00.697875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1807ea0 (9): Bad file descriptor
00:28:37.036 [2024-11-18 00:34:00.697894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:28:37.036 [2024-11-18 00:34:00.697909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:28:37.036 [2024-11-18 00:34:00.697927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:28:37.036 [2024-11-18 00:34:00.697944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
[... the same four-line error sequence (Ctrlr is in error state / controller reinitialization failed / in failed state. / Resetting controller failed.) repeats for cnode5, cnode6 and cnode7 ...]
00:28:37.036 [2024-11-18 00:34:00.698153] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
[... the "Failed to flush tqpair=... (9): Bad file descriptor" error then repeats for tqpair 0x13ac450, 0x13a6860, 0x17cb590, 0x17dd330 and 0x18080d0, and the same four-line error sequence repeats for cnode10 ...]
00:28:37.036 [2024-11-18 00:34:00.699092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:28:37.036 [2024-11-18 00:34:00.699117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:37.036 [2024-11-18 00:34:00.699135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:37.036 [2024-11-18 00:34:00.699152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:37.037 [2024-11-18 00:34:00.699196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:37.037 [2024-11-18 00:34:00.699213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:37.037 [2024-11-18 00:34:00.699233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:37.037 [2024-11-18 00:34:00.699246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:37.037 [2024-11-18 00:34:00.699260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:37.037 [2024-11-18 00:34:00.699273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:37.037 [2024-11-18 00:34:00.699286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:37.037 [2024-11-18 00:34:00.699298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:28:37.037 [2024-11-18 00:34:00.699323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:37.037 [2024-11-18 00:34:00.699338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:37.037 [2024-11-18 00:34:00.699351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:37.037 [2024-11-18 00:34:00.699364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:28:37.037 [2024-11-18 00:34:00.699379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:37.037 [2024-11-18 00:34:00.699392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:37.037 [2024-11-18 00:34:00.699404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:37.037 [2024-11-18 00:34:00.699416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:28:37.037 [2024-11-18 00:34:00.699430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:37.037 [2024-11-18 00:34:00.699443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:37.037 [2024-11-18 00:34:00.699456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:37.037 [2024-11-18 00:34:00.699468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:28:37.037 [2024-11-18 00:34:00.699596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.037 [2024-11-18 00:34:00.699627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e7e10 with addr=10.0.0.2, port=4420 00:28:37.037 [2024-11-18 00:34:00.699643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e7e10 is same with the state(6) to be set 00:28:37.037 [2024-11-18 00:34:00.699735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.037 [2024-11-18 00:34:00.699761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f4f50 with addr=10.0.0.2, port=4420 00:28:37.037 [2024-11-18 00:34:00.699783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f4f50 is same with the state(6) to be set 00:28:37.037 [2024-11-18 00:34:00.699854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.037 [2024-11-18 00:34:00.699880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e6a90 with addr=10.0.0.2, port=4420 00:28:37.037 [2024-11-18 00:34:00.699896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e6a90 is same with the state(6) to be set 00:28:37.037 [2024-11-18 00:34:00.699985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.037 [2024-11-18 00:34:00.700018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b40a0 with addr=10.0.0.2, port=4420 00:28:37.037 [2024-11-18 00:34:00.700034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b40a0 is same with the state(6) to be set 00:28:37.037 [2024-11-18 00:34:00.700077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e7e10 (9): Bad file descriptor 00:28:37.037 [2024-11-18 
00:34:00.700101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f4f50 (9): Bad file descriptor 00:28:37.037 [2024-11-18 00:34:00.700120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e6a90 (9): Bad file descriptor 00:28:37.037 [2024-11-18 00:34:00.700138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b40a0 (9): Bad file descriptor 00:28:37.037 [2024-11-18 00:34:00.700180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:37.037 [2024-11-18 00:34:00.700198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:37.037 [2024-11-18 00:34:00.700211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:37.037 [2024-11-18 00:34:00.700224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:28:37.037 [2024-11-18 00:34:00.700239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:37.037 [2024-11-18 00:34:00.700252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:37.037 [2024-11-18 00:34:00.700265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:37.037 [2024-11-18 00:34:00.700277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:28:37.037 [2024-11-18 00:34:00.700290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:37.037 [2024-11-18 00:34:00.700322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:37.037 [2024-11-18 00:34:00.700337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:37.037 [2024-11-18 00:34:00.700350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:37.037 [2024-11-18 00:34:00.700364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:37.037 [2024-11-18 00:34:00.700377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:37.037 [2024-11-18 00:34:00.700390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:37.037 [2024-11-18 00:34:00.700401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:28:37.297 00:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 327309 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 327309 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 327309 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:38.679 rmmod nvme_tcp 00:28:38.679 rmmod nvme_fabrics 00:28:38.679 rmmod nvme_keyring 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:38.679 00:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 327136 ']' 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 327136 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 327136 ']' 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 327136 00:28:38.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (327136) - No such process 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 327136 is not found' 00:28:38.679 Process with pid 327136 is not found 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.679 00:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.582 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:40.582 00:28:40.582 real 0m7.628s 00:28:40.582 user 0m19.012s 00:28:40.582 sys 0m1.484s 00:28:40.582 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:40.582 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:40.582 ************************************ 00:28:40.582 END TEST nvmf_shutdown_tc3 00:28:40.582 ************************************ 00:28:40.582 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:40.582 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:40.582 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:40.582 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:40.582 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:40.582 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:40.582 ************************************ 00:28:40.582 START TEST nvmf_shutdown_tc4 00:28:40.582 ************************************ 00:28:40.582 00:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:40.583 00:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:40.583 00:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:40.583 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:40.583 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:40.583 00:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:28:40.583 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:40.583 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:40.583 00:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:40.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:40.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:28:40.583 00:28:40.583 --- 10.0.0.2 ping statistics --- 00:28:40.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.583 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:28:40.583 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:40.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:40.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:28:40.583 00:28:40.583 --- 10.0.0.1 ping statistics --- 00:28:40.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.583 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:40.842 00:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=328214 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 328214 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 328214 ']' 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
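[editor's note] The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the waitforlisten helper in autotest_common.sh, which polls until the target's RPC socket is reachable. A minimal sketch of that polling pattern follows; the `waitfile` name and the plain file stand-in for the socket are hypothetical (the real helper also checks the pid and probes the socket via rpc.py):

```shell
# Hypothetical stand-in for waitforlisten: poll until a path exists,
# giving up after a bounded number of retries. The real helper waits on
# /var/tmp/spdk.sock; a plain file is used here so the sketch runs anywhere.
waitfile() {
  local path=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    if [ -e "$path" ]; then
      return 0
    fi
    sleep 0.1
  done
  return 1   # timed out
}

tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/spdk.sock" ) &   # simulate the target coming up
waitfile "$tmp/spdk.sock" && echo "listener ready"
wait
rm -r "$tmp"
```

The bounded retry count is what turns a hung target into a test failure instead of a stalled pipeline.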
00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:40.842 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:40.842 [2024-11-18 00:34:04.481371] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:40.842 [2024-11-18 00:34:04.481444] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.842 [2024-11-18 00:34:04.554063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:40.842 [2024-11-18 00:34:04.599565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:40.842 [2024-11-18 00:34:04.599615] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.842 [2024-11-18 00:34:04.599637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:40.842 [2024-11-18 00:34:04.599648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:40.842 [2024-11-18 00:34:04.599657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
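[editor's note] The nvmf_tcp_init sequence near the top of this test isolates the target NIC (cvl_0_0) in a network namespace, leaves the initiator NIC (cvl_0_1) in the host, and opens TCP port 4420. The dry-run sketch below mirrors those commands from the log; the function name is hypothetical and it only prints what it would run, since the real commands need root:

```shell
# Dry-run sketch of the namespace topology from nvmf/common.sh's
# nvmf_tcp_init (hypothetical helper; prints commands instead of running them).
nvmf_tcp_init_sketch() {
  local tgt_if=$1 ini_if=$2 ns=${3:-cvl_0_0_ns_spdk}
  run() { echo "+ $*"; }   # replace the echo with "$@" (as root) to apply
  run ip netns add "$ns"                         # namespace for the target side
  run ip link set "$tgt_if" netns "$ns"          # move the target NIC into it
  run ip addr add 10.0.0.1/24 dev "$ini_if"      # initiator stays on the host
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  run ip link set "$ini_if" up
  run ip netns exec "$ns" ip link set "$tgt_if" up
  run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
}

nvmf_tcp_init_sketch cvl_0_0 cvl_0_1
```

Putting the target in its own namespace is what lets a single machine exercise a real TCP path between initiator and target, which the two pings in the log then verify in both directions.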
00:28:40.842 [2024-11-18 00:34:04.601163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:40.842 [2024-11-18 00:34:04.601227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:40.842 [2024-11-18 00:34:04.601293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:40.842 [2024-11-18 00:34:04.601296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:41.101 [2024-11-18 00:34:04.744068] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.101 00:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.101 00:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:41.101 Malloc1 00:28:41.101 [2024-11-18 00:34:04.839327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.101 Malloc2 00:28:41.101 Malloc3 00:28:41.359 Malloc4 00:28:41.359 Malloc5 00:28:41.359 Malloc6 00:28:41.359 Malloc7 00:28:41.359 Malloc8 00:28:41.617 Malloc9 
00:28:41.617 Malloc10 00:28:41.617 00:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.617 00:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:41.617 00:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:41.617 00:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:41.617 00:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=328361 00:28:41.617 00:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:41.617 00:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:41.617 [2024-11-18 00:34:05.372905] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
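[editor's note] The create_subsystems loop above (shutdown.sh lines 27-36) does not issue RPCs one by one: each iteration `cat`s one block of commands into rpcs.txt, and the whole file is replayed through rpc.py in a single call, which is why ten Malloc bdevs appear almost at once. A hypothetical re-creation of that batching pattern; the RPC names exist in SPDK's rpc.py, but the sizes and listener values here are illustrative, not the test's exact arguments:

```shell
# Emit one block of RPC commands per subsystem, mirroring the
# "for i in ${num_subsystems[@]}; do cat; done > rpcs.txt" batching.
gen_subsystem_rpcs() {
  local n=$1 i
  for i in $(seq 1 "$n"); do
    cat <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
  done
}

rpcs=$(mktemp)
gen_subsystem_rpcs 10 > "$rpcs"   # 10 subsystems, 4 RPCs each
# scripts/rpc.py < "$rpcs"        # would submit the batch to a running target
rm "$rpcs"
```

Batching through one rpc.py process avoids paying Python startup cost forty times, which matters when the same loop runs in every shutdown test case.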
00:28:46.891 00:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:46.891 00:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 328214 00:28:46.891 00:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 328214 ']' 00:28:46.891 00:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 328214 00:28:46.891 00:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:28:46.891 00:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:46.891 00:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 328214 00:28:46.891 00:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:46.891 00:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:46.891 00:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 328214' 00:28:46.891 killing process with pid 328214 00:28:46.891 00:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 328214 00:28:46.891 00:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 328214 00:28:46.891 [2024-11-18 00:34:10.365752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685850 is same with the state(6) to be set 00:28:46.891 [2024-11-18 00:34:10.365840] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685850 is same with the state(6) to be set 00:28:46.891 [2024-11-18 00:34:10.365857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685850 is same with the state(6) to be set 00:28:46.891 [2024-11-18 00:34:10.365871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685850 is same with the state(6) to be set 00:28:46.891 [2024-11-18 00:34:10.365902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685850 is same with the state(6) to be set 00:28:46.891 [2024-11-18 00:34:10.367469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16849e0 is same with the state(6) to be set 00:28:46.891 [2024-11-18 00:34:10.367520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16849e0 is same with the state(6) to be set 00:28:46.891 [2024-11-18 00:34:10.367537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16849e0 is same with the state(6) to be set 00:28:46.891 [2024-11-18 00:34:10.367551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16849e0 is same with the state(6) to be set 00:28:46.891 [2024-11-18 00:34:10.367563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16849e0 is same with the state(6) to be set 00:28:46.891 [2024-11-18 00:34:10.367575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16849e0 is same with the state(6) to be set 00:28:46.892 [2024-11-18 00:34:10.369715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16860c0 is same with the state(6) to be set 00:28:46.892 [2024-11-18 00:34:10.369754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16860c0 is same with the state(6) to be set 00:28:46.892 [2024-11-18 00:34:10.369772] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16860c0 is same with the state(6) to be set 00:28:46.892 [2024-11-18 00:34:10.369786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16860c0 is same with the state(6) to be set 00:28:46.892 [2024-11-18 00:34:10.369799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16860c0 is same with the state(6) to be set 00:28:46.892 [2024-11-18 00:34:10.369816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16860c0 is same with the state(6) to be set 00:28:46.892 [2024-11-18 00:34:10.369832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16860c0 is same with the state(6) to be set 00:28:46.892 [2024-11-18 00:34:10.369846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16860c0 is same with the state(6) to be set 00:28:46.892 [2024-11-18 00:34:10.369858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16860c0 is same with the state(6) to be set 00:28:46.892 [2024-11-18 00:34:10.369886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16860c0 is same with the state(6) to be set 00:28:46.892 [2024-11-18 00:34:10.370822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1686590 is same with the state(6) to be set 00:28:46.892 [2024-11-18 00:34:10.370857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1686590 is same with the state(6) to be set 00:28:46.892 [2024-11-18 00:34:10.370875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1686590 is same with the state(6) to be set 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 [2024-11-18 00:34:10.370888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1686590 is same with the state(6) to be set 00:28:46.892 
[2024-11-18 00:34:10.370905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1686590 is same with the state(6) to be set 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 [2024-11-18 00:34:10.370921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1686590 is same with the state(6) to be set 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 [2024-11-18 00:34:10.370934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1686590 is same with the state(6) to be set 00:28:46.892 starting I/O failed: -6 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 starting I/O failed: -6 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 starting I/O failed: -6 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 starting I/O failed: -6 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 starting I/O failed: -6 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 starting I/O failed: -6 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8)
00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 starting I/O failed: -6 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 starting I/O failed: -6 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 starting I/O failed: -6 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 starting I/O failed: -6 00:28:46.892 [2024-11-18 00:34:10.371707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 starting I/O failed: -6 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 starting I/O failed: -6 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 starting I/O failed: -6 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 starting I/O failed: -6 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 starting I/O failed: -6 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 starting I/O failed: -6 00:28:46.892 Write 
completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.892 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 [2024-11-18 00:34:10.372644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685bf0 is same with Write completed with error (sct=0, sc=8) 00:28:46.893 the state(6) to be set 00:28:46.893 Write completed with error (sct=0, sc=8) 
00:28:46.893 [2024-11-18 00:34:10.372677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685bf0 is same with the state(6) to be set 00:28:46.893 [2024-11-18 00:34:10.372693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685bf0 is same with the state(6) to be set 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 [2024-11-18 00:34:10.372708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685bf0 is same with the state(6) to be set 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 [2024-11-18 00:34:10.372721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685bf0 is same with the state(6) to be set 00:28:46.893 starting I/O failed: -6 00:28:46.893 [2024-11-18 00:34:10.372734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685bf0 is same with the state(6) to be set 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 [2024-11-18 00:34:10.372747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685bf0 is same with the state(6) to be set 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 [2024-11-18 00:34:10.372765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685bf0 is same with the state(6) to be set 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 [2024-11-18 00:34:10.372778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1685bf0 is same with the state(6) to be set 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 [2024-11-18 00:34:10.372868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*:
[nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.893 starting I/O failed: -6 00:28:46.893 Write completed with error (sct=0, sc=8) 00:28:46.894 starting I/O failed: -6 00:28:46.894 Write completed with error (sct=0, sc=8) 00:28:46.894 starting I/O 
failed: -6
00:28:46.894 Write completed with error (sct=0, sc=8)
00:28:46.894 starting I/O failed: -6
00:28:46.894 (the two messages above repeat, interleaved, throughout this span)
00:28:46.894 [2024-11-18 00:34:10.373991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:46.894 [2024-11-18 00:34:10.374001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ae2b0 is same with the state(6) to be set (repeated through 00:34:10.374125)
00:28:46.895 [2024-11-18 00:34:10.374702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ae780 is same with the state(6) to be set (repeated through 00:34:10.374784)
00:28:46.895 [2024-11-18 00:34:10.375142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18aec50 is same with the state(6) to be set (repeated through 00:34:10.375271)
00:28:46.895 [2024-11-18 00:34:10.375687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:46.895 [2024-11-18 00:34:10.375692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18adde0 is same with the state(6) to be set (repeated through 00:34:10.375770)
00:28:46.895 NVMe io qpair process completion error
00:28:46.896 [2024-11-18 00:34:10.383119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:46.896 [2024-11-18 00:34:10.383511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1865430 is same with the state(6) to be set (repeated through 00:34:10.383562)
00:28:46.896 [2024-11-18 00:34:10.384224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:46.896 [2024-11-18 00:34:10.385202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1863750 is same with the state(6) to be set (repeated through 00:34:10.385291)
00:28:46.897 [2024-11-18 00:34:10.385624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1863c20 is same with the state(6) to be set (repeated through 00:34:10.385744)
00:28:46.897 [2024-11-18 00:34:10.385648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:46.897 [2024-11-18 00:34:10.386121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1862db0 is same with the state(6) to be set (repeated through 00:34:10.386191)
00:28:46.897 [2024-11-18 00:34:10.387402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:46.897 NVMe io qpair process completion error
00:28:46.898 [2024-11-18 00:34:10.388623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:46.898 [2024-11-18 00:34:10.388832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1864a90 is same with the state(6) to be set (repeated through 00:34:10.388975)
00:28:46.898 [2024-11-18 00:34:10.389683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:46.899 [2024-11-18 00:34:10.390025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18640f0 is same with the state(6) to be set (repeated through 00:34:10.390110)
00:28:46.899 Write completed with error
(sct=0, sc=8) 00:28:46.899 [2024-11-18 00:34:10.390122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18640f0 is same with the state(6) to be set 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 
00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 [2024-11-18 00:34:10.390830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, 
sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error 
(sct=0, sc=8) 00:28:46.899 starting I/O failed: -6 00:28:46.899 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with 
error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 [2024-11-18 00:34:10.393004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.900 NVMe io qpair process completion error 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write 
completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error 
(sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 
00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.900 starting I/O failed: -6 00:28:46.900 Write completed with error (sct=0, sc=8) 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 [2024-11-18 00:34:10.395143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:46.901 starting I/O failed: -6 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 
starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 
Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 [2024-11-18 00:34:10.396351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write 
completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 
Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.901 Write completed with error (sct=0, sc=8) 00:28:46.901 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 
00:28:46.902 [2024-11-18 00:34:10.398010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:46.902 NVMe io qpair process completion error 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write completed with error (sct=0, sc=8) 00:28:46.902 Write 
completed with error (sct=0, sc=8) 00:28:46.902 starting I/O failed: -6
00:28:46.902 Write completed with error (sct=0, sc=8)
00:28:46.902 starting I/O failed: -6
00:28:46.902 [2024-11-18 00:34:10.399171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:46.902 Write completed with error (sct=0, sc=8)
00:28:46.902 starting I/O failed: -6
00:28:46.902 [2024-11-18 00:34:10.400292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:46.903 Write completed with error (sct=0, sc=8)
00:28:46.903 starting I/O failed: -6
00:28:46.903 [2024-11-18 00:34:10.401483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:46.903 Write completed with error (sct=0, sc=8)
00:28:46.903 starting I/O failed: -6
00:28:46.904 [2024-11-18 00:34:10.403236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:46.904 NVMe io qpair process completion error
00:28:46.904 Write completed with error (sct=0, sc=8)
00:28:46.904 starting I/O failed: -6
00:28:46.904 [2024-11-18 00:34:10.404511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:46.904 Write completed with error (sct=0, sc=8)
00:28:46.904 starting I/O failed: -6
00:28:46.904 [2024-11-18 00:34:10.405538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:46.905 Write completed with error (sct=0, sc=8)
00:28:46.905 starting I/O failed: -6
00:28:46.905 [2024-11-18 00:34:10.406704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:46.905 Write completed with error (sct=0, sc=8)
00:28:46.905 starting I/O failed: -6
00:28:46.906 [2024-11-18 00:34:10.410058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:46.906 NVMe io qpair process completion error
00:28:46.906 Write completed with error (sct=0, sc=8)
00:28:46.906 starting I/O failed: -6
00:28:46.906 [2024-11-18 00:34:10.411497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:46.906 Write completed with error (sct=0, sc=8)
00:28:46.906 starting I/O failed: -6
00:28:46.906 [2024-11-18 00:34:10.412580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:46.907 Write completed with error (sct=0, sc=8)
00:28:46.907 starting I/O failed: -6
00:28:46.907 [2024-11-18 00:34:10.413741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:46.907 Write completed with error (sct=0, sc=8)
00:28:46.907 starting I/O failed: -6
00:28:46.907 Write
completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 
Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 [2024-11-18 00:34:10.416609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.907 NVMe io qpair process completion error 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 
starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.907 starting I/O failed: -6 00:28:46.907 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 [2024-11-18 00:34:10.417763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 
00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write 
completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 [2024-11-18 00:34:10.418914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 
00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 
00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 Write completed with error (sct=0, sc=8) 00:28:46.908 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 [2024-11-18 00:34:10.420073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:46.909 starting I/O failed: -6 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, 
sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error 
(sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with 
error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 [2024-11-18 00:34:10.422476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.909 NVMe io qpair process completion error 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 
00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 Write completed with error (sct=0, sc=8) 00:28:46.909 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 [2024-11-18 00:34:10.423912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:46.910 starting I/O failed: -6 00:28:46.910 starting I/O failed: -6 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write 
completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error 
(sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 [2024-11-18 00:34:10.425049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 Write completed with error (sct=0, sc=8) 00:28:46.910 starting I/O failed: -6 00:28:46.910 Write completed with error 
(sct=0, sc=8)
00:28:46.910 starting I/O failed: -6
00:28:46.910 Write completed with error (sct=0, sc=8)
00:28:46.910 starting I/O failed: -6
[the previous two lines repeat for each outstanding write; repeats omitted]
00:28:46.910 [2024-11-18 00:34:10.426198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
[write-error repeats omitted]
00:28:46.911 [2024-11-18 00:34:10.427943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:46.911 NVMe io qpair process completion error
[write-error repeats omitted]
00:28:46.912 [2024-11-18 00:34:10.429225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[write-error repeats omitted]
00:28:46.912 [2024-11-18 00:34:10.430367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[write-error repeats omitted]
00:28:46.912 [2024-11-18 00:34:10.431530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
[write-error repeats omitted]
00:28:46.913 [2024-11-18 00:34:10.434860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:46.913 NVMe io qpair process completion error
00:28:46.913 Initializing NVMe Controllers
00:28:46.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:28:46.913 Controller IO queue size 128, less than required.
00:28:46.913 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
[the same two queue-size warning lines follow each controller attach below; repeats omitted]
00:28:46.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:28:46.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:28:46.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:28:46.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:28:46.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:46.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:28:46.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:28:46.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:28:46.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:28:46.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:46.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:46.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:46.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:46.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:46.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:46.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:46.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:46.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:46.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:46.913 Initialization complete. Launching workers.
00:28:46.913 ========================================================
00:28:46.913 Latency(us)
00:28:46.914 Device Information :                                                       IOPS      MiB/s    Average        min        max
00:28:46.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1832.74      78.75   69856.53    1033.89  128234.67
00:28:46.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:   1819.51      78.18   70384.21     811.99  127902.84
00:28:46.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:   1792.63      77.03   71466.27     868.97  154670.67
00:28:46.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:   1814.18      77.95   70660.09     976.30  127526.77
00:28:46.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:   1797.32      77.23   71362.11    1006.98  131763.52
00:28:46.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   1778.97      76.44   71333.85    1040.27  122860.72
00:28:46.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:   1798.17      77.27   71306.24     855.57  131805.44
00:28:46.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:   1871.79      80.43   68524.55    1155.61  134420.82
00:28:46.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:   1830.61      78.66   69327.17     912.78  113258.19
00:28:46.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:   1822.29      78.30   69667.08    1077.80  118491.28
00:28:46.914 ========================================================
00:28:46.914 Total :                                                                   18158.21     780.24   70376.24     811.99  154670.67
00:28:46.914
00:28:46.914 [2024-11-18 00:34:10.440450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c7240 is same with the state(6) to be set
00:28:46.914 [2024-11-18 00:34:10.440556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc140 is same with the state(6) to be set
00:28:46.914 [2024-11-18 00:34:10.440626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12eea40 is same with the state(6) to be set
00:28:46.914 [2024-11-18 00:34:10.440681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4c40 is same with the state(6) to be set
00:28:46.914 [2024-11-18 00:34:10.440739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d1040 is same with the state(6) to be set
00:28:46.914 [2024-11-18 00:34:10.440797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2330 is same with the state(6) to be set
00:28:46.914 [2024-11-18 00:34:10.440855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dfd40 is same with the state(6) to be set
00:28:46.914 [2024-11-18 00:34:10.440912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e9b40 is same with the state(6) to be set
00:28:46.914 [2024-11-18 00:34:10.440969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dae40 is same with the state(6) to be set
00:28:46.914 [2024-11-18 00:34:10.441025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d5f40 is same with the state(6) to be set
00:28:46.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:47.173 00:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 328361
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 328361
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 328361
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 328214 ']'
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 328214
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 328214 ']'
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 328214
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (328214) - No such process
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 328214 is not found'
00:28:48.115 Process with pid 328214 is not found
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:48.115 00:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:50.650 00:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:50.650
00:28:50.650 real 0m9.692s
00:28:50.650 user 0m22.501s
00:28:50.650 sys 0m6.000s
00:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:50.650 00:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:50.650 ************************************ 00:28:50.650 END TEST nvmf_shutdown_tc4 00:28:50.650 ************************************ 00:28:50.650 00:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:28:50.650 00:28:50.650 real 0m37.273s 00:28:50.650 user 1m39.693s 00:28:50.650 sys 0m12.430s 00:28:50.650 00:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:50.650 00:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:50.650 ************************************ 00:28:50.650 END TEST nvmf_shutdown 00:28:50.650 ************************************ 00:28:50.650 00:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:50.650 00:34:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:50.650 00:34:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:50.650 00:34:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:50.650 ************************************ 00:28:50.650 START TEST nvmf_nsid 00:28:50.650 ************************************ 00:28:50.650 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:50.650 * Looking for test storage... 
00:28:50.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:50.650 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:50.650 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:28:50.650 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:50.650 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:50.650 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:50.650 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:50.650 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:50.650 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:28:50.650 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:28:50.650 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:28:50.650 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:50.651 
00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:50.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.651 --rc genhtml_branch_coverage=1 00:28:50.651 --rc genhtml_function_coverage=1 00:28:50.651 --rc genhtml_legend=1 00:28:50.651 --rc geninfo_all_blocks=1 00:28:50.651 --rc 
geninfo_unexecuted_blocks=1 00:28:50.651 00:28:50.651 ' 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:50.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.651 --rc genhtml_branch_coverage=1 00:28:50.651 --rc genhtml_function_coverage=1 00:28:50.651 --rc genhtml_legend=1 00:28:50.651 --rc geninfo_all_blocks=1 00:28:50.651 --rc geninfo_unexecuted_blocks=1 00:28:50.651 00:28:50.651 ' 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:50.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.651 --rc genhtml_branch_coverage=1 00:28:50.651 --rc genhtml_function_coverage=1 00:28:50.651 --rc genhtml_legend=1 00:28:50.651 --rc geninfo_all_blocks=1 00:28:50.651 --rc geninfo_unexecuted_blocks=1 00:28:50.651 00:28:50.651 ' 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:50.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.651 --rc genhtml_branch_coverage=1 00:28:50.651 --rc genhtml_function_coverage=1 00:28:50.651 --rc genhtml_legend=1 00:28:50.651 --rc geninfo_all_blocks=1 00:28:50.651 --rc geninfo_unexecuted_blocks=1 00:28:50.651 00:28:50.651 ' 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.651 00:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.651 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:50.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:28:50.652 00:34:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:52.557 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:52.557 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:28:52.557 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:52.557 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:52.557 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:52.557 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:52.557 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:52.557 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:28:52.557 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:52.557 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:28:52.557 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:28:52.557 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:28:52.557 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:28:52.557 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:28:52.557 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:28:52.816 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.816 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.816 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.816 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.816 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.816 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:52.817 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:52.817 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:52.817 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:52.817 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:52.817 00:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:52.817 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:28:52.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:28:52.817 00:28:52.817 --- 10.0.0.2 ping statistics --- 00:28:52.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.817 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:52.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:52.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:28:52.817 00:28:52.817 --- 10.0.0.1 ping statistics --- 00:28:52.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.817 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:28:52.817 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:52.817 00:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:52.818 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:52.818 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=331014 00:28:52.818 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:28:52.818 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 331014 00:28:52.818 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 331014 ']' 00:28:52.818 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.818 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:52.818 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.818 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:52.818 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:52.818 [2024-11-18 00:34:16.597902] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:28:52.818 [2024-11-18 00:34:16.597986] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:53.076 [2024-11-18 00:34:16.670555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.076 [2024-11-18 00:34:16.717583] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.076 [2024-11-18 00:34:16.717654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.076 [2024-11-18 00:34:16.717692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:53.076 [2024-11-18 00:34:16.717703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:53.076 [2024-11-18 00:34:16.717712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:53.076 [2024-11-18 00:34:16.718286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=331154 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.076 
00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=d2e2fb81-26f5-4a79-829e-784f0057947a 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=a23d491f-540f-4b67-a5ec-397f6b3a9934 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=8b0bfd83-2f84-46a5-ad3a-f7f5657564dd 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.076 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:53.076 null0 00:28:53.076 null1 00:28:53.334 null2 00:28:53.334 [2024-11-18 00:34:16.903862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.334 [2024-11-18 00:34:16.922016] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:28:53.334 [2024-11-18 00:34:16.922098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331154 ] 00:28:53.334 [2024-11-18 00:34:16.928069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.334 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.334 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 331154 /var/tmp/tgt2.sock 00:28:53.334 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 331154 ']' 00:28:53.334 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:28:53.334 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:53.334 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:28:53.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:28:53.334 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:53.334 00:34:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:53.335 [2024-11-18 00:34:16.997106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.335 [2024-11-18 00:34:17.043479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.593 00:34:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.593 00:34:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:53.593 00:34:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:28:54.159 [2024-11-18 00:34:17.681825] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.159 [2024-11-18 00:34:17.698014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:28:54.159 nvme0n1 nvme0n2 00:28:54.159 nvme1n1 00:28:54.159 00:34:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:28:54.159 00:34:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:28:54.159 00:34:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:54.726 00:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:28:54.726 00:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:28:54.726 00:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:28:54.726 00:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:28:54.726 00:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:28:54.726 00:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:28:54.726 00:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:28:54.726 00:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:54.726 00:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:54.726 00:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:54.726 00:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:28:54.726 00:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:28:54.726 00:34:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid d2e2fb81-26f5-4a79-829e-784f0057947a 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:28:55.660 00:34:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d2e2fb8126f54a79829e784f0057947a 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D2E2FB8126F54A79829E784F0057947A 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ D2E2FB8126F54A79829E784F0057947A == \D\2\E\2\F\B\8\1\2\6\F\5\4\A\7\9\8\2\9\E\7\8\4\F\0\0\5\7\9\4\7\A ]] 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid a23d491f-540f-4b67-a5ec-397f6b3a9934 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:28:55.660 
00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a23d491f540f4b67a5ec397f6b3a9934 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A23D491F540F4B67A5EC397F6B3A9934 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ A23D491F540F4B67A5EC397F6B3A9934 == \A\2\3\D\4\9\1\F\5\4\0\F\4\B\6\7\A\5\E\C\3\9\7\F\6\B\3\A\9\9\3\4 ]] 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 8b0bfd83-2f84-46a5-ad3a-f7f5657564dd 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:28:55.660 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:55.918 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8b0bfd832f8446a5ad3af7f5657564dd 00:28:55.918 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8B0BFD832F8446A5AD3AF7F5657564DD 00:28:55.918 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 8B0BFD832F8446A5AD3AF7F5657564DD == \8\B\0\B\F\D\8\3\2\F\8\4\4\6\A\5\A\D\3\A\F\7\F\5\6\5\7\5\6\4\D\D ]] 00:28:55.918 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:28:55.918 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:28:55.918 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:28:55.918 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 331154 00:28:55.918 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 331154 ']' 00:28:55.918 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 331154 00:28:55.918 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:55.918 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:55.918 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 331154 00:28:55.918 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:55.918 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:55.918 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 331154' 00:28:55.918 killing process with pid 331154 00:28:55.918 00:34:19 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 331154 00:28:55.918 00:34:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 331154 00:28:56.484 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:28:56.484 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:56.484 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:28:56.484 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:56.484 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:28:56.485 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:56.485 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:56.485 rmmod nvme_tcp 00:28:56.485 rmmod nvme_fabrics 00:28:56.485 rmmod nvme_keyring 00:28:56.485 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:56.485 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:28:56.485 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:28:56.485 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 331014 ']' 00:28:56.485 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 331014 00:28:56.485 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 331014 ']' 00:28:56.485 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 331014 00:28:56.485 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:56.485 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:56.485 00:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 331014 00:28:56.485 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:56.485 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:56.485 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 331014' 00:28:56.485 killing process with pid 331014 00:28:56.485 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 331014 00:28:56.485 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 331014 00:28:56.760 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:56.760 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:56.760 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:56.760 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:28:56.760 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:28:56.760 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:56.760 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:28:56.760 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:56.760 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:56.760 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.760 00:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.760 00:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.671 00:34:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:58.671 00:28:58.671 real 0m8.421s 00:28:58.671 user 0m8.226s 00:28:58.671 sys 0m2.770s 00:28:58.671 00:34:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:58.671 00:34:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:58.671 ************************************ 00:28:58.671 END TEST nvmf_nsid 00:28:58.671 ************************************ 00:28:58.671 00:34:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:58.671 00:28:58.671 real 18m14.551s 00:28:58.671 user 50m38.749s 00:28:58.671 sys 4m1.390s 00:28:58.671 00:34:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:58.671 00:34:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:58.671 ************************************ 00:28:58.671 END TEST nvmf_target_extra 00:28:58.671 ************************************ 00:28:58.671 00:34:22 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:58.671 00:34:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:58.671 00:34:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:58.671 00:34:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:58.930 ************************************ 00:28:58.930 START TEST nvmf_host 00:28:58.930 ************************************ 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:58.930 * Looking for test storage... 
00:28:58.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.930 00:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:58.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.930 --rc genhtml_branch_coverage=1 00:28:58.930 --rc genhtml_function_coverage=1 00:28:58.930 --rc genhtml_legend=1 00:28:58.930 --rc geninfo_all_blocks=1 00:28:58.930 --rc geninfo_unexecuted_blocks=1 00:28:58.930 00:28:58.930 ' 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:58.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.931 --rc genhtml_branch_coverage=1 00:28:58.931 --rc genhtml_function_coverage=1 00:28:58.931 --rc genhtml_legend=1 00:28:58.931 --rc 
geninfo_all_blocks=1 00:28:58.931 --rc geninfo_unexecuted_blocks=1 00:28:58.931 00:28:58.931 ' 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:58.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.931 --rc genhtml_branch_coverage=1 00:28:58.931 --rc genhtml_function_coverage=1 00:28:58.931 --rc genhtml_legend=1 00:28:58.931 --rc geninfo_all_blocks=1 00:28:58.931 --rc geninfo_unexecuted_blocks=1 00:28:58.931 00:28:58.931 ' 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:58.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.931 --rc genhtml_branch_coverage=1 00:28:58.931 --rc genhtml_function_coverage=1 00:28:58.931 --rc genhtml_legend=1 00:28:58.931 --rc geninfo_all_blocks=1 00:28:58.931 --rc geninfo_unexecuted_blocks=1 00:28:58.931 00:28:58.931 ' 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:58.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.931 ************************************ 00:28:58.931 START TEST nvmf_multicontroller 00:28:58.931 ************************************ 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:58.931 * Looking for test storage... 
00:28:58.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:28:58.931 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:59.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.189 --rc genhtml_branch_coverage=1 00:28:59.189 --rc genhtml_function_coverage=1 
00:28:59.189 --rc genhtml_legend=1 00:28:59.189 --rc geninfo_all_blocks=1 00:28:59.189 --rc geninfo_unexecuted_blocks=1 00:28:59.189 00:28:59.189 ' 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:59.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.189 --rc genhtml_branch_coverage=1 00:28:59.189 --rc genhtml_function_coverage=1 00:28:59.189 --rc genhtml_legend=1 00:28:59.189 --rc geninfo_all_blocks=1 00:28:59.189 --rc geninfo_unexecuted_blocks=1 00:28:59.189 00:28:59.189 ' 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:59.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.189 --rc genhtml_branch_coverage=1 00:28:59.189 --rc genhtml_function_coverage=1 00:28:59.189 --rc genhtml_legend=1 00:28:59.189 --rc geninfo_all_blocks=1 00:28:59.189 --rc geninfo_unexecuted_blocks=1 00:28:59.189 00:28:59.189 ' 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:59.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.189 --rc genhtml_branch_coverage=1 00:28:59.189 --rc genhtml_function_coverage=1 00:28:59.189 --rc genhtml_legend=1 00:28:59.189 --rc geninfo_all_blocks=1 00:28:59.189 --rc geninfo_unexecuted_blocks=1 00:28:59.189 00:28:59.189 ' 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.189 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.189 00:34:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:59.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:28:59.190 00:34:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:01.720 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:01.720 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:01.720 00:34:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:01.720 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:01.720 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:01.720 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:01.721 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:01.721 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:01.721 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:01.721 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:01.721 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:01.721 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:01.721 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:01.721 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:01.721 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:01.721 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:01.721 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:01.721 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:01.721 00:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:01.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:01.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:29:01.721 00:29:01.721 --- 10.0.0.2 ping statistics --- 00:29:01.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.721 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:01.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:01.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:29:01.721 00:29:01.721 --- 10.0.0.1 ping statistics --- 00:29:01.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.721 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=333592 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 333592 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 333592 ']' 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.721 [2024-11-18 00:34:25.148669] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:29:01.721 [2024-11-18 00:34:25.148758] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.721 [2024-11-18 00:34:25.221165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:01.721 [2024-11-18 00:34:25.265887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:01.721 [2024-11-18 00:34:25.265949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:01.721 [2024-11-18 00:34:25.265977] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:01.721 [2024-11-18 00:34:25.265987] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:01.721 [2024-11-18 00:34:25.265997] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:01.721 [2024-11-18 00:34:25.267461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:01.721 [2024-11-18 00:34:25.267580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:01.721 [2024-11-18 00:34:25.267583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.721 [2024-11-18 00:34:25.413042] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.721 Malloc0 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.721 [2024-11-18 
00:34:25.473441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.721 [2024-11-18 00:34:25.481286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.721 Malloc1 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:01.721 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.979 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=333619 00:29:01.979 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:01.979 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:29:01.979 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 333619 /var/tmp/bdevperf.sock 00:29:01.979 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 333619 ']' 00:29:01.979 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:01.979 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:01.979 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:01.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:01.979 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:01.979 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.237 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:02.237 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:02.237 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:02.237 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.237 00:34:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.237 NVMe0n1 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.237 1 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:02.237 00:34:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.237 request: 00:29:02.237 { 00:29:02.237 "name": "NVMe0", 00:29:02.237 "trtype": "tcp", 00:29:02.237 "traddr": "10.0.0.2", 00:29:02.237 "adrfam": "ipv4", 00:29:02.237 "trsvcid": "4420", 00:29:02.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:02.237 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:02.237 "hostaddr": "10.0.0.1", 00:29:02.237 "prchk_reftag": false, 00:29:02.237 "prchk_guard": false, 00:29:02.237 "hdgst": false, 00:29:02.237 "ddgst": false, 00:29:02.237 "allow_unrecognized_csi": false, 00:29:02.237 "method": "bdev_nvme_attach_controller", 00:29:02.237 "req_id": 1 00:29:02.237 } 00:29:02.237 Got JSON-RPC error response 00:29:02.237 response: 00:29:02.237 { 00:29:02.237 "code": -114, 00:29:02.237 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:02.237 } 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:02.237 00:34:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.237 request: 00:29:02.237 { 00:29:02.237 "name": "NVMe0", 00:29:02.237 "trtype": "tcp", 00:29:02.237 "traddr": "10.0.0.2", 00:29:02.237 "adrfam": "ipv4", 00:29:02.237 "trsvcid": "4420", 00:29:02.237 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:02.237 "hostaddr": "10.0.0.1", 00:29:02.237 "prchk_reftag": false, 00:29:02.237 "prchk_guard": false, 00:29:02.237 "hdgst": false, 00:29:02.237 "ddgst": false, 00:29:02.237 "allow_unrecognized_csi": false, 00:29:02.237 "method": "bdev_nvme_attach_controller", 00:29:02.237 "req_id": 1 00:29:02.237 } 00:29:02.237 Got JSON-RPC error response 00:29:02.237 response: 00:29:02.237 { 00:29:02.237 "code": -114, 00:29:02.237 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:02.237 } 00:29:02.237 00:34:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:02.237 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:02.238 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:02.238 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.238 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.496 request: 00:29:02.496 { 00:29:02.496 "name": "NVMe0", 00:29:02.496 "trtype": "tcp", 00:29:02.496 "traddr": "10.0.0.2", 00:29:02.496 "adrfam": "ipv4", 00:29:02.496 "trsvcid": "4420", 00:29:02.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:02.496 "hostaddr": "10.0.0.1", 00:29:02.496 "prchk_reftag": false, 00:29:02.496 "prchk_guard": false, 00:29:02.496 "hdgst": false, 00:29:02.496 "ddgst": false, 00:29:02.496 "multipath": "disable", 00:29:02.496 "allow_unrecognized_csi": false, 00:29:02.496 "method": "bdev_nvme_attach_controller", 00:29:02.496 "req_id": 1 00:29:02.496 } 00:29:02.496 Got JSON-RPC error response 00:29:02.496 response: 00:29:02.496 { 00:29:02.496 "code": -114, 00:29:02.496 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:02.496 } 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.496 request: 00:29:02.496 { 00:29:02.496 "name": "NVMe0", 00:29:02.496 "trtype": "tcp", 00:29:02.496 "traddr": "10.0.0.2", 00:29:02.496 "adrfam": "ipv4", 00:29:02.496 "trsvcid": "4420", 00:29:02.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:02.496 "hostaddr": "10.0.0.1", 00:29:02.496 "prchk_reftag": false, 00:29:02.496 "prchk_guard": false, 00:29:02.496 "hdgst": false, 00:29:02.496 "ddgst": false, 00:29:02.496 "multipath": "failover", 00:29:02.496 "allow_unrecognized_csi": false, 00:29:02.496 "method": "bdev_nvme_attach_controller", 00:29:02.496 "req_id": 1 00:29:02.496 } 00:29:02.496 Got JSON-RPC error response 00:29:02.496 response: 00:29:02.496 { 00:29:02.496 "code": -114, 00:29:02.496 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:02.496 } 00:29:02.496 00:34:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.496 NVMe0n1 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.496 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.754 00:29:02.754 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.754 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:02.754 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:02.754 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.754 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:02.754 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.754 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:02.754 00:34:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:03.688 { 00:29:03.688 "results": [ 00:29:03.688 { 00:29:03.688 "job": "NVMe0n1", 00:29:03.688 "core_mask": "0x1", 00:29:03.688 "workload": "write", 00:29:03.688 "status": "finished", 00:29:03.688 "queue_depth": 128, 00:29:03.688 "io_size": 4096, 00:29:03.688 "runtime": 1.008397, 00:29:03.688 "iops": 18263.640213130344, 00:29:03.688 "mibps": 71.3423445825404, 00:29:03.688 "io_failed": 0, 00:29:03.688 "io_timeout": 0, 00:29:03.688 "avg_latency_us": 6992.873036868112, 00:29:03.688 "min_latency_us": 4126.34074074074, 00:29:03.688 "max_latency_us": 12718.838518518518 00:29:03.688 } 00:29:03.688 ], 00:29:03.688 "core_count": 1 00:29:03.688 } 00:29:03.688 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:29:03.688 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.688 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 333619 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 333619 ']' 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 333619 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 333619 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 333619' 00:29:03.947 killing process with pid 333619 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 333619 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 333619 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.947 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:04.206 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:04.206 [2024-11-18 00:34:25.589135] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:29:04.206 [2024-11-18 00:34:25.589219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333619 ] 00:29:04.206 [2024-11-18 00:34:25.656905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.206 [2024-11-18 00:34:25.703196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.206 [2024-11-18 00:34:26.353909] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 185e2a30-0f01-4af3-871b-ad22651925e1 already exists 00:29:04.206 [2024-11-18 00:34:26.353943] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:185e2a30-0f01-4af3-871b-ad22651925e1 alias for bdev NVMe1n1 00:29:04.206 [2024-11-18 00:34:26.353973] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:04.206 Running I/O for 1 seconds... 00:29:04.206 18196.00 IOPS, 71.08 MiB/s 00:29:04.206 Latency(us) 00:29:04.206 [2024-11-17T23:34:28.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.206 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:04.206 NVMe0n1 : 1.01 18263.64 71.34 0.00 0.00 6992.87 4126.34 12718.84 00:29:04.206 [2024-11-17T23:34:28.028Z] =================================================================================================================== 00:29:04.206 [2024-11-17T23:34:28.028Z] Total : 18263.64 71.34 0.00 0.00 6992.87 4126.34 12718.84 00:29:04.206 Received shutdown signal, test time was about 1.000000 seconds 00:29:04.206 00:29:04.206 Latency(us) 00:29:04.206 [2024-11-17T23:34:28.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.206 [2024-11-17T23:34:28.028Z] =================================================================================================================== 00:29:04.206 [2024-11-17T23:34:28.028Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:29:04.206 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:04.206 rmmod nvme_tcp 00:29:04.206 rmmod nvme_fabrics 00:29:04.206 rmmod nvme_keyring 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 333592 ']' 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 333592 00:29:04.206 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 333592 ']' 00:29:04.207 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 333592 
00:29:04.207 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:04.207 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:04.207 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 333592 00:29:04.207 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:04.207 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:04.207 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 333592' 00:29:04.207 killing process with pid 333592 00:29:04.207 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 333592 00:29:04.207 00:34:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 333592 00:29:04.465 00:34:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:04.465 00:34:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:04.465 00:34:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:04.465 00:34:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:04.465 00:34:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:04.465 00:34:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:04.465 00:34:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:04.465 00:34:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:04.465 00:34:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:29:04.465 00:34:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.465 00:34:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.465 00:34:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.374 00:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:06.374 00:29:06.374 real 0m7.501s 00:29:06.374 user 0m11.598s 00:29:06.374 sys 0m2.423s 00:29:06.374 00:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:06.374 00:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:06.374 ************************************ 00:29:06.374 END TEST nvmf_multicontroller 00:29:06.374 ************************************ 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.634 ************************************ 00:29:06.634 START TEST nvmf_aer 00:29:06.634 ************************************ 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:06.634 * Looking for test storage... 
00:29:06.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:06.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.634 --rc genhtml_branch_coverage=1 00:29:06.634 --rc genhtml_function_coverage=1 00:29:06.634 --rc genhtml_legend=1 00:29:06.634 --rc geninfo_all_blocks=1 00:29:06.634 --rc geninfo_unexecuted_blocks=1 00:29:06.634 00:29:06.634 ' 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:06.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.634 --rc 
genhtml_branch_coverage=1 00:29:06.634 --rc genhtml_function_coverage=1 00:29:06.634 --rc genhtml_legend=1 00:29:06.634 --rc geninfo_all_blocks=1 00:29:06.634 --rc geninfo_unexecuted_blocks=1 00:29:06.634 00:29:06.634 ' 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:06.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.634 --rc genhtml_branch_coverage=1 00:29:06.634 --rc genhtml_function_coverage=1 00:29:06.634 --rc genhtml_legend=1 00:29:06.634 --rc geninfo_all_blocks=1 00:29:06.634 --rc geninfo_unexecuted_blocks=1 00:29:06.634 00:29:06.634 ' 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:06.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.634 --rc genhtml_branch_coverage=1 00:29:06.634 --rc genhtml_function_coverage=1 00:29:06.634 --rc genhtml_legend=1 00:29:06.634 --rc geninfo_all_blocks=1 00:29:06.634 --rc geninfo_unexecuted_blocks=1 00:29:06.634 00:29:06.634 ' 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:06.634 00:34:30 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:06.634 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:06.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:06.635 00:34:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:09.169 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:09.169 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.169 00:34:32 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:09.169 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:09.169 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:09.170 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:09.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:09.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:29:09.170 00:29:09.170 --- 10.0.0.2 ping statistics --- 00:29:09.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.170 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:09.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:09.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:29:09.170 00:29:09.170 --- 10.0.0.1 ping statistics --- 00:29:09.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.170 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=335837 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 335837 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 335837 ']' 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.170 [2024-11-18 00:34:32.650234] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:29:09.170 [2024-11-18 00:34:32.650338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.170 [2024-11-18 00:34:32.723704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:09.170 [2024-11-18 00:34:32.772004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:09.170 [2024-11-18 00:34:32.772056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.170 [2024-11-18 00:34:32.772083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.170 [2024-11-18 00:34:32.772094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.170 [2024-11-18 00:34:32.772104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:09.170 [2024-11-18 00:34:32.773767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.170 [2024-11-18 00:34:32.774192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:09.170 [2024-11-18 00:34:32.774251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:09.170 [2024-11-18 00:34:32.774254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.170 [2024-11-18 00:34:32.912183] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.170 Malloc0 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.170 [2024-11-18 00:34:32.984124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.170 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.428 [ 00:29:09.428 { 00:29:09.428 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:09.428 "subtype": "Discovery", 00:29:09.428 "listen_addresses": [], 00:29:09.428 "allow_any_host": true, 00:29:09.428 "hosts": [] 00:29:09.428 }, 00:29:09.428 { 00:29:09.428 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:09.428 "subtype": "NVMe", 00:29:09.428 "listen_addresses": [ 00:29:09.428 { 00:29:09.428 "trtype": "TCP", 00:29:09.428 "adrfam": "IPv4", 00:29:09.428 "traddr": "10.0.0.2", 00:29:09.428 "trsvcid": "4420" 00:29:09.428 } 00:29:09.428 ], 00:29:09.428 "allow_any_host": true, 00:29:09.428 "hosts": [], 00:29:09.428 "serial_number": "SPDK00000000000001", 00:29:09.428 "model_number": "SPDK bdev Controller", 00:29:09.428 "max_namespaces": 2, 00:29:09.428 "min_cntlid": 1, 00:29:09.428 "max_cntlid": 65519, 00:29:09.428 "namespaces": [ 00:29:09.428 { 00:29:09.428 "nsid": 1, 00:29:09.428 "bdev_name": "Malloc0", 00:29:09.428 "name": "Malloc0", 00:29:09.429 "nguid": "7CE76F1A09B44BD6A39B16EB00075B4D", 00:29:09.429 "uuid": "7ce76f1a-09b4-4bd6-a39b-16eb00075b4d" 00:29:09.429 } 00:29:09.429 ] 00:29:09.429 } 00:29:09.429 ] 00:29:09.429 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.429 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:09.429 00:34:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=335975 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.429 Malloc1 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.429 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.687 Asynchronous Event Request test 00:29:09.687 Attaching to 10.0.0.2 00:29:09.687 Attached to 10.0.0.2 00:29:09.687 Registering asynchronous event callbacks... 00:29:09.687 Starting namespace attribute notice tests for all controllers... 00:29:09.687 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:09.687 aer_cb - Changed Namespace 00:29:09.687 Cleaning up... 
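The `waitforfile` polling traced just above (from `autotest_common.sh`) amounts to re-checking for the touch file every 0.1 s, up to 200 attempts, before giving up. A stand-alone sketch of that loop — the function name and the 200 × 0.1 s bounds come straight from the trace; the rest is a simplified re-creation, not the actual `autotest_common.sh` implementation:

```shell
#!/usr/bin/env bash
# Sketch of the waitforfile polling loop traced above: poll for a
# file every 0.1 s, giving up after 200 attempts (~20 s total).
waitforfile() {
    local i=0
    while [ ! -e "$1" ]; do
        if [ "$i" -lt 200 ]; then
            i=$((i + 1))
            sleep 0.1
        else
            return 1  # gave up waiting
        fi
    done
    return 0
}

# Demo: the file shows up shortly after we start waiting, just as the
# aer tool touches /tmp/aer_touch_file once its callbacks are registered.
f=$(mktemp -u)
( sleep 0.3; touch "$f" ) &
waitforfile "$f" && echo "file appeared"
rm -f "$f"
```

In the test itself this is what lets `aer.sh` block at step 36 until the `aer` binary signals readiness, before the script adds `Malloc1` to trigger the namespace-change event.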
00:29:09.687 [ 00:29:09.687 { 00:29:09.687 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:09.687 "subtype": "Discovery", 00:29:09.687 "listen_addresses": [], 00:29:09.687 "allow_any_host": true, 00:29:09.687 "hosts": [] 00:29:09.687 }, 00:29:09.687 { 00:29:09.687 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:09.687 "subtype": "NVMe", 00:29:09.687 "listen_addresses": [ 00:29:09.687 { 00:29:09.687 "trtype": "TCP", 00:29:09.687 "adrfam": "IPv4", 00:29:09.687 "traddr": "10.0.0.2", 00:29:09.687 "trsvcid": "4420" 00:29:09.687 } 00:29:09.687 ], 00:29:09.687 "allow_any_host": true, 00:29:09.687 "hosts": [], 00:29:09.687 "serial_number": "SPDK00000000000001", 00:29:09.687 "model_number": "SPDK bdev Controller", 00:29:09.687 "max_namespaces": 2, 00:29:09.687 "min_cntlid": 1, 00:29:09.687 "max_cntlid": 65519, 00:29:09.687 "namespaces": [ 00:29:09.687 { 00:29:09.687 "nsid": 1, 00:29:09.687 "bdev_name": "Malloc0", 00:29:09.687 "name": "Malloc0", 00:29:09.687 "nguid": "7CE76F1A09B44BD6A39B16EB00075B4D", 00:29:09.687 "uuid": "7ce76f1a-09b4-4bd6-a39b-16eb00075b4d" 00:29:09.687 }, 00:29:09.687 { 00:29:09.687 "nsid": 2, 00:29:09.687 "bdev_name": "Malloc1", 00:29:09.687 "name": "Malloc1", 00:29:09.687 "nguid": "BF54EA336D954D20A1B4065908AD9BCC", 00:29:09.687 "uuid": "bf54ea33-6d95-4d20-a1b4-065908ad9bcc" 00:29:09.687 } 00:29:09.687 ] 00:29:09.687 } 00:29:09.687 ] 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 335975 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.687 00:34:33 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:09.687 rmmod nvme_tcp 00:29:09.687 rmmod nvme_fabrics 00:29:09.687 rmmod nvme_keyring 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:09.687 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:09.688 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
335837 ']' 00:29:09.688 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 335837 00:29:09.688 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 335837 ']' 00:29:09.688 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 335837 00:29:09.688 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:09.688 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:09.688 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 335837 00:29:09.688 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:09.688 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:09.688 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 335837' 00:29:09.688 killing process with pid 335837 00:29:09.688 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 335837 00:29:09.688 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 335837 00:29:09.948 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:09.948 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:09.948 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:09.948 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:09.948 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:09.948 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:09.948 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:09.948 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:09.948 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:09.948 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.948 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.948 00:34:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.483 00:34:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:12.483 00:29:12.483 real 0m5.470s 00:29:12.483 user 0m4.288s 00:29:12.483 sys 0m1.966s 00:29:12.483 00:34:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:12.483 00:34:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:12.483 ************************************ 00:29:12.484 END TEST nvmf_aer 00:29:12.484 ************************************ 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.484 ************************************ 00:29:12.484 START TEST nvmf_async_init 00:29:12.484 ************************************ 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:12.484 * Looking for test storage... 
00:29:12.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:12.484 00:34:35 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:12.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:12.484 --rc genhtml_branch_coverage=1 00:29:12.484 --rc genhtml_function_coverage=1 00:29:12.484 --rc genhtml_legend=1 00:29:12.484 --rc geninfo_all_blocks=1 00:29:12.484 --rc geninfo_unexecuted_blocks=1 00:29:12.484 
00:29:12.484 ' 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:12.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:12.484 --rc genhtml_branch_coverage=1 00:29:12.484 --rc genhtml_function_coverage=1 00:29:12.484 --rc genhtml_legend=1 00:29:12.484 --rc geninfo_all_blocks=1 00:29:12.484 --rc geninfo_unexecuted_blocks=1 00:29:12.484 00:29:12.484 ' 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:12.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:12.484 --rc genhtml_branch_coverage=1 00:29:12.484 --rc genhtml_function_coverage=1 00:29:12.484 --rc genhtml_legend=1 00:29:12.484 --rc geninfo_all_blocks=1 00:29:12.484 --rc geninfo_unexecuted_blocks=1 00:29:12.484 00:29:12.484 ' 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:12.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:12.484 --rc genhtml_branch_coverage=1 00:29:12.484 --rc genhtml_function_coverage=1 00:29:12.484 --rc genhtml_legend=1 00:29:12.484 --rc geninfo_all_blocks=1 00:29:12.484 --rc geninfo_unexecuted_blocks=1 00:29:12.484 00:29:12.484 ' 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:12.484 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:12.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=117ca4079d5143669776c6f7c22fd921 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:12.485 00:34:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:14.389 00:34:37 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:14.389 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:14.389 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:14.389 00:34:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:14.389 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:14.389 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:14.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:14.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:29:14.389 00:29:14.389 --- 10.0.0.2 ping statistics --- 00:29:14.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.389 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:29:14.389 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:14.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:29:14.389 00:29:14.389 --- 10.0.0.1 ping statistics --- 00:29:14.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.390 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=337927 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 337927 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 337927 ']' 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.390 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.648 [2024-11-18 00:34:38.213380] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:29:14.648 [2024-11-18 00:34:38.213457] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.648 [2024-11-18 00:34:38.285464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.648 [2024-11-18 00:34:38.330230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.648 [2024-11-18 00:34:38.330280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.648 [2024-11-18 00:34:38.330293] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.648 [2024-11-18 00:34:38.330304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.648 [2024-11-18 00:34:38.330321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:14.648 [2024-11-18 00:34:38.330870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.648 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:14.648 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:14.648 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:14.648 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:14.648 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.648 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.648 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:14.648 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.648 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.648 [2024-11-18 00:34:38.468165] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.906 null0 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 117ca4079d5143669776c6f7c22fd921 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:14.906 [2024-11-18 00:34:38.508405] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.906 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.165 nvme0n1 00:29:15.165 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.165 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:15.165 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.165 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.165 [ 00:29:15.165 { 00:29:15.165 "name": "nvme0n1", 00:29:15.165 "aliases": [ 00:29:15.165 "117ca407-9d51-4366-9776-c6f7c22fd921" 00:29:15.165 ], 00:29:15.165 "product_name": "NVMe disk", 00:29:15.165 "block_size": 512, 00:29:15.165 "num_blocks": 2097152, 00:29:15.165 "uuid": "117ca407-9d51-4366-9776-c6f7c22fd921", 00:29:15.165 "numa_id": 0, 00:29:15.165 "assigned_rate_limits": { 00:29:15.165 "rw_ios_per_sec": 0, 00:29:15.165 "rw_mbytes_per_sec": 0, 00:29:15.165 "r_mbytes_per_sec": 0, 00:29:15.165 "w_mbytes_per_sec": 0 00:29:15.165 }, 00:29:15.165 "claimed": false, 00:29:15.165 "zoned": false, 00:29:15.165 "supported_io_types": { 00:29:15.165 "read": true, 00:29:15.165 "write": true, 00:29:15.165 "unmap": false, 00:29:15.165 "flush": true, 00:29:15.165 "reset": true, 00:29:15.165 "nvme_admin": true, 00:29:15.165 "nvme_io": true, 00:29:15.165 "nvme_io_md": false, 00:29:15.165 "write_zeroes": true, 00:29:15.165 "zcopy": false, 00:29:15.165 "get_zone_info": false, 00:29:15.165 "zone_management": false, 00:29:15.165 "zone_append": false, 00:29:15.165 "compare": true, 00:29:15.165 "compare_and_write": true, 00:29:15.165 "abort": true, 00:29:15.165 "seek_hole": false, 00:29:15.165 "seek_data": false, 00:29:15.165 "copy": true, 00:29:15.165 
"nvme_iov_md": false 00:29:15.165 }, 00:29:15.165 "memory_domains": [ 00:29:15.165 { 00:29:15.165 "dma_device_id": "system", 00:29:15.165 "dma_device_type": 1 00:29:15.165 } 00:29:15.165 ], 00:29:15.165 "driver_specific": { 00:29:15.165 "nvme": [ 00:29:15.165 { 00:29:15.165 "trid": { 00:29:15.165 "trtype": "TCP", 00:29:15.165 "adrfam": "IPv4", 00:29:15.165 "traddr": "10.0.0.2", 00:29:15.165 "trsvcid": "4420", 00:29:15.165 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:15.165 }, 00:29:15.165 "ctrlr_data": { 00:29:15.165 "cntlid": 1, 00:29:15.165 "vendor_id": "0x8086", 00:29:15.165 "model_number": "SPDK bdev Controller", 00:29:15.165 "serial_number": "00000000000000000000", 00:29:15.165 "firmware_revision": "25.01", 00:29:15.165 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:15.165 "oacs": { 00:29:15.165 "security": 0, 00:29:15.165 "format": 0, 00:29:15.165 "firmware": 0, 00:29:15.165 "ns_manage": 0 00:29:15.165 }, 00:29:15.165 "multi_ctrlr": true, 00:29:15.165 "ana_reporting": false 00:29:15.165 }, 00:29:15.165 "vs": { 00:29:15.165 "nvme_version": "1.3" 00:29:15.165 }, 00:29:15.165 "ns_data": { 00:29:15.165 "id": 1, 00:29:15.165 "can_share": true 00:29:15.165 } 00:29:15.165 } 00:29:15.165 ], 00:29:15.165 "mp_policy": "active_passive" 00:29:15.165 } 00:29:15.165 } 00:29:15.165 ] 00:29:15.165 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.165 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:15.165 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.165 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.165 [2024-11-18 00:34:38.770364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:15.165 [2024-11-18 00:34:38.770440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1d09700 (9): Bad file descriptor 00:29:15.165 [2024-11-18 00:34:38.904463] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:15.165 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.165 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:15.165 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.165 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.165 [ 00:29:15.165 { 00:29:15.165 "name": "nvme0n1", 00:29:15.165 "aliases": [ 00:29:15.165 "117ca407-9d51-4366-9776-c6f7c22fd921" 00:29:15.165 ], 00:29:15.165 "product_name": "NVMe disk", 00:29:15.165 "block_size": 512, 00:29:15.165 "num_blocks": 2097152, 00:29:15.165 "uuid": "117ca407-9d51-4366-9776-c6f7c22fd921", 00:29:15.165 "numa_id": 0, 00:29:15.165 "assigned_rate_limits": { 00:29:15.165 "rw_ios_per_sec": 0, 00:29:15.165 "rw_mbytes_per_sec": 0, 00:29:15.165 "r_mbytes_per_sec": 0, 00:29:15.165 "w_mbytes_per_sec": 0 00:29:15.165 }, 00:29:15.165 "claimed": false, 00:29:15.165 "zoned": false, 00:29:15.165 "supported_io_types": { 00:29:15.165 "read": true, 00:29:15.165 "write": true, 00:29:15.165 "unmap": false, 00:29:15.165 "flush": true, 00:29:15.165 "reset": true, 00:29:15.165 "nvme_admin": true, 00:29:15.165 "nvme_io": true, 00:29:15.165 "nvme_io_md": false, 00:29:15.165 "write_zeroes": true, 00:29:15.165 "zcopy": false, 00:29:15.165 "get_zone_info": false, 00:29:15.165 "zone_management": false, 00:29:15.165 "zone_append": false, 00:29:15.165 "compare": true, 00:29:15.165 "compare_and_write": true, 00:29:15.165 "abort": true, 00:29:15.165 "seek_hole": false, 00:29:15.165 "seek_data": false, 00:29:15.165 "copy": true, 00:29:15.165 "nvme_iov_md": false 00:29:15.165 }, 00:29:15.165 "memory_domains": [ 
00:29:15.165 { 00:29:15.165 "dma_device_id": "system", 00:29:15.165 "dma_device_type": 1 00:29:15.165 } 00:29:15.165 ], 00:29:15.165 "driver_specific": { 00:29:15.165 "nvme": [ 00:29:15.165 { 00:29:15.165 "trid": { 00:29:15.165 "trtype": "TCP", 00:29:15.165 "adrfam": "IPv4", 00:29:15.165 "traddr": "10.0.0.2", 00:29:15.165 "trsvcid": "4420", 00:29:15.165 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:15.165 }, 00:29:15.165 "ctrlr_data": { 00:29:15.165 "cntlid": 2, 00:29:15.165 "vendor_id": "0x8086", 00:29:15.165 "model_number": "SPDK bdev Controller", 00:29:15.165 "serial_number": "00000000000000000000", 00:29:15.165 "firmware_revision": "25.01", 00:29:15.165 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:15.165 "oacs": { 00:29:15.165 "security": 0, 00:29:15.165 "format": 0, 00:29:15.165 "firmware": 0, 00:29:15.165 "ns_manage": 0 00:29:15.165 }, 00:29:15.165 "multi_ctrlr": true, 00:29:15.165 "ana_reporting": false 00:29:15.165 }, 00:29:15.165 "vs": { 00:29:15.165 "nvme_version": "1.3" 00:29:15.165 }, 00:29:15.165 "ns_data": { 00:29:15.165 "id": 1, 00:29:15.165 "can_share": true 00:29:15.165 } 00:29:15.165 } 00:29:15.165 ], 00:29:15.165 "mp_policy": "active_passive" 00:29:15.165 } 00:29:15.165 } 00:29:15.165 ] 00:29:15.165 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.165 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.165 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.165 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.165 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.165 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.n4RdET1eXK 
00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.n4RdET1eXK 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.n4RdET1eXK 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.166 [2024-11-18 00:34:38.958942] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:15.166 [2024-11-18 00:34:38.959073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.166 00:34:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.166 [2024-11-18 00:34:38.974974] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:15.423 nvme0n1 00:29:15.423 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.423 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:15.423 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.423 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.423 [ 00:29:15.423 { 00:29:15.423 "name": "nvme0n1", 00:29:15.423 "aliases": [ 00:29:15.423 "117ca407-9d51-4366-9776-c6f7c22fd921" 00:29:15.423 ], 00:29:15.423 "product_name": "NVMe disk", 00:29:15.423 "block_size": 512, 00:29:15.423 "num_blocks": 2097152, 00:29:15.423 "uuid": "117ca407-9d51-4366-9776-c6f7c22fd921", 00:29:15.423 "numa_id": 0, 00:29:15.423 "assigned_rate_limits": { 00:29:15.423 "rw_ios_per_sec": 0, 00:29:15.423 
"rw_mbytes_per_sec": 0, 00:29:15.423 "r_mbytes_per_sec": 0, 00:29:15.423 "w_mbytes_per_sec": 0 00:29:15.423 }, 00:29:15.423 "claimed": false, 00:29:15.423 "zoned": false, 00:29:15.423 "supported_io_types": { 00:29:15.423 "read": true, 00:29:15.423 "write": true, 00:29:15.423 "unmap": false, 00:29:15.423 "flush": true, 00:29:15.423 "reset": true, 00:29:15.423 "nvme_admin": true, 00:29:15.423 "nvme_io": true, 00:29:15.423 "nvme_io_md": false, 00:29:15.423 "write_zeroes": true, 00:29:15.423 "zcopy": false, 00:29:15.423 "get_zone_info": false, 00:29:15.423 "zone_management": false, 00:29:15.423 "zone_append": false, 00:29:15.423 "compare": true, 00:29:15.423 "compare_and_write": true, 00:29:15.423 "abort": true, 00:29:15.423 "seek_hole": false, 00:29:15.423 "seek_data": false, 00:29:15.423 "copy": true, 00:29:15.423 "nvme_iov_md": false 00:29:15.423 }, 00:29:15.423 "memory_domains": [ 00:29:15.423 { 00:29:15.423 "dma_device_id": "system", 00:29:15.423 "dma_device_type": 1 00:29:15.423 } 00:29:15.423 ], 00:29:15.423 "driver_specific": { 00:29:15.423 "nvme": [ 00:29:15.423 { 00:29:15.423 "trid": { 00:29:15.423 "trtype": "TCP", 00:29:15.423 "adrfam": "IPv4", 00:29:15.423 "traddr": "10.0.0.2", 00:29:15.423 "trsvcid": "4421", 00:29:15.423 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:15.423 }, 00:29:15.423 "ctrlr_data": { 00:29:15.423 "cntlid": 3, 00:29:15.423 "vendor_id": "0x8086", 00:29:15.423 "model_number": "SPDK bdev Controller", 00:29:15.423 "serial_number": "00000000000000000000", 00:29:15.423 "firmware_revision": "25.01", 00:29:15.423 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:15.423 "oacs": { 00:29:15.423 "security": 0, 00:29:15.423 "format": 0, 00:29:15.423 "firmware": 0, 00:29:15.423 "ns_manage": 0 00:29:15.423 }, 00:29:15.423 "multi_ctrlr": true, 00:29:15.423 "ana_reporting": false 00:29:15.423 }, 00:29:15.423 "vs": { 00:29:15.423 "nvme_version": "1.3" 00:29:15.423 }, 00:29:15.423 "ns_data": { 00:29:15.423 "id": 1, 00:29:15.423 "can_share": true 00:29:15.423 } 
00:29:15.423 } 00:29:15.423 ], 00:29:15.423 "mp_policy": "active_passive" 00:29:15.423 } 00:29:15.423 } 00:29:15.423 ] 00:29:15.423 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.423 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.423 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.423 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:15.423 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.423 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.n4RdET1eXK 00:29:15.423 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:15.423 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:15.423 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:15.423 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:15.423 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:15.423 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:15.423 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:15.423 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:15.424 rmmod nvme_tcp 00:29:15.424 rmmod nvme_fabrics 00:29:15.424 rmmod nvme_keyring 00:29:15.424 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:15.424 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:15.424 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:15.424 00:34:39 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 337927 ']' 00:29:15.424 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 337927 00:29:15.424 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 337927 ']' 00:29:15.424 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 337927 00:29:15.424 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:15.424 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:15.424 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 337927 00:29:15.424 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:15.424 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:15.424 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 337927' 00:29:15.424 killing process with pid 337927 00:29:15.424 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 337927 00:29:15.424 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 337927 00:29:15.683 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:15.683 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:15.683 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:15.683 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:15.683 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:15.683 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:15.683 00:34:39 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:15.683 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:15.683 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:15.683 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.683 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.683 00:34:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.229 00:34:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:18.229 00:29:18.229 real 0m5.670s 00:29:18.229 user 0m2.133s 00:29:18.229 sys 0m1.935s 00:29:18.229 00:34:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:18.229 00:34:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:18.229 ************************************ 00:29:18.229 END TEST nvmf_async_init 00:29:18.229 ************************************ 00:29:18.229 00:34:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:18.229 00:34:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:18.229 00:34:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:18.229 00:34:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.229 ************************************ 00:29:18.229 START TEST dma 00:29:18.229 ************************************ 00:29:18.229 00:34:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:18.229 * 
Looking for test storage... 00:29:18.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:18.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.230 --rc genhtml_branch_coverage=1 00:29:18.230 --rc genhtml_function_coverage=1 00:29:18.230 --rc genhtml_legend=1 00:29:18.230 --rc geninfo_all_blocks=1 00:29:18.230 --rc geninfo_unexecuted_blocks=1 00:29:18.230 00:29:18.230 ' 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:18.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.230 --rc genhtml_branch_coverage=1 00:29:18.230 --rc genhtml_function_coverage=1 
00:29:18.230 --rc genhtml_legend=1 00:29:18.230 --rc geninfo_all_blocks=1 00:29:18.230 --rc geninfo_unexecuted_blocks=1 00:29:18.230 00:29:18.230 ' 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:18.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.230 --rc genhtml_branch_coverage=1 00:29:18.230 --rc genhtml_function_coverage=1 00:29:18.230 --rc genhtml_legend=1 00:29:18.230 --rc geninfo_all_blocks=1 00:29:18.230 --rc geninfo_unexecuted_blocks=1 00:29:18.230 00:29:18.230 ' 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:18.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.230 --rc genhtml_branch_coverage=1 00:29:18.230 --rc genhtml_function_coverage=1 00:29:18.230 --rc genhtml_legend=1 00:29:18.230 --rc geninfo_all_blocks=1 00:29:18.230 --rc geninfo_unexecuted_blocks=1 00:29:18.230 00:29:18.230 ' 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:18.230 
00:34:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.230 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:18.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:18.231 00:29:18.231 real 0m0.166s 00:29:18.231 user 0m0.120s 00:29:18.231 sys 0m0.056s 00:29:18.231 00:34:41 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:18.231 ************************************ 00:29:18.231 END TEST dma 00:29:18.231 ************************************ 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.231 ************************************ 00:29:18.231 START TEST nvmf_identify 00:29:18.231 ************************************ 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:18.231 * Looking for test storage... 
00:29:18.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:18.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.231 --rc genhtml_branch_coverage=1 00:29:18.231 --rc genhtml_function_coverage=1 00:29:18.231 --rc genhtml_legend=1 00:29:18.231 --rc geninfo_all_blocks=1 00:29:18.231 --rc geninfo_unexecuted_blocks=1 00:29:18.231 00:29:18.231 ' 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:29:18.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.231 --rc genhtml_branch_coverage=1 00:29:18.231 --rc genhtml_function_coverage=1 00:29:18.231 --rc genhtml_legend=1 00:29:18.231 --rc geninfo_all_blocks=1 00:29:18.231 --rc geninfo_unexecuted_blocks=1 00:29:18.231 00:29:18.231 ' 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:18.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.231 --rc genhtml_branch_coverage=1 00:29:18.231 --rc genhtml_function_coverage=1 00:29:18.231 --rc genhtml_legend=1 00:29:18.231 --rc geninfo_all_blocks=1 00:29:18.231 --rc geninfo_unexecuted_blocks=1 00:29:18.231 00:29:18.231 ' 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:18.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.231 --rc genhtml_branch_coverage=1 00:29:18.231 --rc genhtml_function_coverage=1 00:29:18.231 --rc genhtml_legend=1 00:29:18.231 --rc geninfo_all_blocks=1 00:29:18.231 --rc geninfo_unexecuted_blocks=1 00:29:18.231 00:29:18.231 ' 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.231 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:18.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:18.232 00:34:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:20.155 00:34:43 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:20.155 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:20.156 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:20.156 
00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:20.156 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:20.156 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:20.156 00:34:43 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:20.156 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:20.156 00:34:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:20.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:20.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:29:20.416 00:29:20.416 --- 10.0.0.2 ping statistics --- 00:29:20.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.416 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:20.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:20.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:29:20.416 00:29:20.416 --- 10.0.0.1 ping statistics --- 00:29:20.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.416 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=340062 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 340062 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 340062 ']' 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:20.416 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.416 [2024-11-18 00:34:44.156016] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:29:20.416 [2024-11-18 00:34:44.156096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.416 [2024-11-18 00:34:44.232043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:20.675 [2024-11-18 00:34:44.281790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.675 [2024-11-18 00:34:44.281842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.675 [2024-11-18 00:34:44.281856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.675 [2024-11-18 00:34:44.281867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:20.675 [2024-11-18 00:34:44.281877] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:20.675 [2024-11-18 00:34:44.283447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.675 [2024-11-18 00:34:44.283477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:20.675 [2024-11-18 00:34:44.283533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:20.675 [2024-11-18 00:34:44.283536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.675 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:20.675 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:20.675 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:20.675 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.675 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.675 [2024-11-18 00:34:44.403276] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.676 Malloc0 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.676 00:34:44 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.676 [2024-11-18 00:34:44.485199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.676 00:34:44 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.676 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.935 [ 00:29:20.935 { 00:29:20.935 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:20.935 "subtype": "Discovery", 00:29:20.935 "listen_addresses": [ 00:29:20.935 { 00:29:20.935 "trtype": "TCP", 00:29:20.935 "adrfam": "IPv4", 00:29:20.935 "traddr": "10.0.0.2", 00:29:20.935 "trsvcid": "4420" 00:29:20.935 } 00:29:20.935 ], 00:29:20.935 "allow_any_host": true, 00:29:20.935 "hosts": [] 00:29:20.935 }, 00:29:20.935 { 00:29:20.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:20.935 "subtype": "NVMe", 00:29:20.935 "listen_addresses": [ 00:29:20.935 { 00:29:20.935 "trtype": "TCP", 00:29:20.935 "adrfam": "IPv4", 00:29:20.935 "traddr": "10.0.0.2", 00:29:20.935 "trsvcid": "4420" 00:29:20.935 } 00:29:20.935 ], 00:29:20.935 "allow_any_host": true, 00:29:20.935 "hosts": [], 00:29:20.935 "serial_number": "SPDK00000000000001", 00:29:20.935 "model_number": "SPDK bdev Controller", 00:29:20.935 "max_namespaces": 32, 00:29:20.935 "min_cntlid": 1, 00:29:20.935 "max_cntlid": 65519, 00:29:20.935 "namespaces": [ 00:29:20.935 { 00:29:20.935 "nsid": 1, 00:29:20.935 "bdev_name": "Malloc0", 00:29:20.935 "name": "Malloc0", 00:29:20.935 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:20.935 "eui64": "ABCDEF0123456789", 00:29:20.935 "uuid": "5883b743-0b95-4d04-b96a-dbf4bb0e1bef" 00:29:20.935 } 00:29:20.935 ] 00:29:20.935 } 00:29:20.935 ] 00:29:20.935 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.935 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:20.935 [2024-11-18 00:34:44.522598] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:29:20.935 [2024-11-18 00:34:44.522637] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid340207 ] 00:29:20.935 [2024-11-18 00:34:44.570530] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:20.935 [2024-11-18 00:34:44.570619] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:20.935 [2024-11-18 00:34:44.570630] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:20.935 [2024-11-18 00:34:44.570645] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:20.935 [2024-11-18 00:34:44.570661] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:20.935 [2024-11-18 00:34:44.574741] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:20.935 [2024-11-18 00:34:44.574810] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7b6650 0 00:29:20.935 [2024-11-18 00:34:44.582327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:20.935 [2024-11-18 00:34:44.582350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:20.935 [2024-11-18 00:34:44.582359] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:20.935 [2024-11-18 00:34:44.582366] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:20.935 [2024-11-18 00:34:44.582423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.935 [2024-11-18 00:34:44.582438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.935 [2024-11-18 00:34:44.582445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b6650) 00:29:20.935 [2024-11-18 00:34:44.582463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:20.935 [2024-11-18 00:34:44.582491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x810f40, cid 0, qid 0 00:29:20.935 [2024-11-18 00:34:44.590327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.935 [2024-11-18 00:34:44.590345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.935 [2024-11-18 00:34:44.590352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.935 [2024-11-18 00:34:44.590360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x810f40) on tqpair=0x7b6650 00:29:20.935 [2024-11-18 00:34:44.590376] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:20.936 [2024-11-18 00:34:44.590404] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:20.936 [2024-11-18 00:34:44.590414] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:20.936 [2024-11-18 00:34:44.590437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.936 [2024-11-18 00:34:44.590446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.936 [2024-11-18 00:34:44.590453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b6650) 
00:29:20.936 [2024-11-18 00:34:44.590464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.936 [2024-11-18 00:34:44.590489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x810f40, cid 0, qid 0 00:29:20.936 [2024-11-18 00:34:44.590621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.936 [2024-11-18 00:34:44.590635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.936 [2024-11-18 00:34:44.590642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.936 [2024-11-18 00:34:44.590649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x810f40) on tqpair=0x7b6650 00:29:20.936 [2024-11-18 00:34:44.590658] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:20.936 [2024-11-18 00:34:44.590671] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:20.936 [2024-11-18 00:34:44.590685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.936 [2024-11-18 00:34:44.590692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.936 [2024-11-18 00:34:44.590699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b6650) 00:29:20.936 [2024-11-18 00:34:44.590709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.936 [2024-11-18 00:34:44.590731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x810f40, cid 0, qid 0 00:29:20.936 [2024-11-18 00:34:44.590823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.936 [2024-11-18 00:34:44.590836] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:20.936 [2024-11-18 00:34:44.590847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.936 [2024-11-18 00:34:44.590854] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x810f40) on tqpair=0x7b6650 00:29:20.936 [2024-11-18 00:34:44.590864] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:20.936 [2024-11-18 00:34:44.590878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:20.936 [2024-11-18 00:34:44.590890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.936 [2024-11-18 00:34:44.590898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.936 [2024-11-18 00:34:44.590904] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b6650) 00:29:20.936 [2024-11-18 00:34:44.590914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.936 [2024-11-18 00:34:44.590936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x810f40, cid 0, qid 0 00:29:20.936 [2024-11-18 00:34:44.591060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.936 [2024-11-18 00:34:44.591073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.936 [2024-11-18 00:34:44.591079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.936 [2024-11-18 00:34:44.591086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x810f40) on tqpair=0x7b6650 00:29:20.936 [2024-11-18 00:34:44.591094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:20.936 [2024-11-18 00:34:44.591110] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.936 [2024-11-18 00:34:44.591119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.936 [2024-11-18 00:34:44.591126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b6650) 00:29:20.936 [2024-11-18 00:34:44.591136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.936 [2024-11-18 00:34:44.591157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x810f40, cid 0, qid 0 00:29:20.936 [2024-11-18 00:34:44.591246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.936 [2024-11-18 00:34:44.591260] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.936 [2024-11-18 00:34:44.591267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.936 [2024-11-18 00:34:44.591273] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x810f40) on tqpair=0x7b6650 00:29:20.936 [2024-11-18 00:34:44.591282] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:20.936 [2024-11-18 00:34:44.591290] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:20.936 [2024-11-18 00:34:44.591303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:20.936 [2024-11-18 00:34:44.591421] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:20.936 [2024-11-18 00:34:44.591431] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:29:20.936 [2024-11-18 00:34:44.591447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.936 [2024-11-18 00:34:44.591455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.936 [2024-11-18 00:34:44.591461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b6650) 00:29:20.936 [2024-11-18 00:34:44.591471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.936 [2024-11-18 00:34:44.591498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x810f40, cid 0, qid 0 00:29:20.936 [2024-11-18 00:34:44.591627] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.936 [2024-11-18 00:34:44.591641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.936 [2024-11-18 00:34:44.591647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.936 [2024-11-18 00:34:44.591654] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x810f40) on tqpair=0x7b6650 00:29:20.936 [2024-11-18 00:34:44.591663] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:20.936 [2024-11-18 00:34:44.591679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.936 [2024-11-18 00:34:44.591688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.936 [2024-11-18 00:34:44.591695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b6650) 00:29:20.936 [2024-11-18 00:34:44.591705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.936 [2024-11-18 00:34:44.591726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x810f40, cid 0, qid 0 00:29:20.936 [2024-11-18 
00:34:44.591827] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.936 [2024-11-18 00:34:44.591841] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.936 [2024-11-18 00:34:44.591848] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.936 [2024-11-18 00:34:44.591854] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x810f40) on tqpair=0x7b6650 00:29:20.936 [2024-11-18 00:34:44.591862] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:20.937 [2024-11-18 00:34:44.591871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:20.937 [2024-11-18 00:34:44.591884] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:20.937 [2024-11-18 00:34:44.591907] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:20.937 [2024-11-18 00:34:44.591924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.591932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b6650) 00:29:20.937 [2024-11-18 00:34:44.591943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.937 [2024-11-18 00:34:44.591964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x810f40, cid 0, qid 0 00:29:20.937 [2024-11-18 00:34:44.592099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.937 [2024-11-18 00:34:44.592114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:29:20.937 [2024-11-18 00:34:44.592121] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.592128] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7b6650): datao=0, datal=4096, cccid=0 00:29:20.937 [2024-11-18 00:34:44.592136] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x810f40) on tqpair(0x7b6650): expected_datao=0, payload_size=4096 00:29:20.937 [2024-11-18 00:34:44.592144] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.592156] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.592164] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.633414] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.937 [2024-11-18 00:34:44.633434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.937 [2024-11-18 00:34:44.633442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.633454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x810f40) on tqpair=0x7b6650 00:29:20.937 [2024-11-18 00:34:44.633468] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:20.937 [2024-11-18 00:34:44.633478] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:20.937 [2024-11-18 00:34:44.633486] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:20.937 [2024-11-18 00:34:44.633500] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:20.937 [2024-11-18 00:34:44.633510] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:29:20.937 [2024-11-18 00:34:44.633518] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:20.937 [2024-11-18 00:34:44.633538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:20.937 [2024-11-18 00:34:44.633552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.633561] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.633567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b6650) 00:29:20.937 [2024-11-18 00:34:44.633579] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:20.937 [2024-11-18 00:34:44.633609] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x810f40, cid 0, qid 0 00:29:20.937 [2024-11-18 00:34:44.633743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.937 [2024-11-18 00:34:44.633756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.937 [2024-11-18 00:34:44.633763] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.633769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x810f40) on tqpair=0x7b6650 00:29:20.937 [2024-11-18 00:34:44.633783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.633791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.633797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b6650) 00:29:20.937 [2024-11-18 00:34:44.633807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.937 [2024-11-18 00:34:44.633818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.633825] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.633832] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7b6650) 00:29:20.937 [2024-11-18 00:34:44.633840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.937 [2024-11-18 00:34:44.633851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.633858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.633864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7b6650) 00:29:20.937 [2024-11-18 00:34:44.633888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.937 [2024-11-18 00:34:44.633899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.633906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.633912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b6650) 00:29:20.937 [2024-11-18 00:34:44.633921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.937 [2024-11-18 00:34:44.633934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:20.937 [2024-11-18 00:34:44.633950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:29:20.937 [2024-11-18 00:34:44.633962] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.633969] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7b6650) 00:29:20.937 [2024-11-18 00:34:44.633980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.937 [2024-11-18 00:34:44.634002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x810f40, cid 0, qid 0 00:29:20.937 [2024-11-18 00:34:44.634029] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8110c0, cid 1, qid 0 00:29:20.937 [2024-11-18 00:34:44.634037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811240, cid 2, qid 0 00:29:20.937 [2024-11-18 00:34:44.634045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8113c0, cid 3, qid 0 00:29:20.937 [2024-11-18 00:34:44.634053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811540, cid 4, qid 0 00:29:20.937 [2024-11-18 00:34:44.634168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.937 [2024-11-18 00:34:44.634182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.937 [2024-11-18 00:34:44.634189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.634196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811540) on tqpair=0x7b6650 00:29:20.937 [2024-11-18 00:34:44.634210] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:20.937 [2024-11-18 00:34:44.634221] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:29:20.937 [2024-11-18 00:34:44.634239] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.634249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7b6650) 00:29:20.937 [2024-11-18 00:34:44.634260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.937 [2024-11-18 00:34:44.634282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811540, cid 4, qid 0 00:29:20.937 [2024-11-18 00:34:44.638327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.937 [2024-11-18 00:34:44.638344] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.937 [2024-11-18 00:34:44.638351] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.638357] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7b6650): datao=0, datal=4096, cccid=4 00:29:20.937 [2024-11-18 00:34:44.638365] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x811540) on tqpair(0x7b6650): expected_datao=0, payload_size=4096 00:29:20.937 [2024-11-18 00:34:44.638373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.937 [2024-11-18 00:34:44.638383] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.638390] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.638399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.938 [2024-11-18 00:34:44.638408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.938 [2024-11-18 00:34:44.638414] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.638421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811540) on tqpair=0x7b6650 00:29:20.938 [2024-11-18 00:34:44.638441] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:20.938 [2024-11-18 00:34:44.638500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.638511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7b6650) 00:29:20.938 [2024-11-18 00:34:44.638522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.938 [2024-11-18 00:34:44.638535] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.638542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.638549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7b6650) 00:29:20.938 [2024-11-18 00:34:44.638558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.938 [2024-11-18 00:34:44.638587] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811540, cid 4, qid 0 00:29:20.938 [2024-11-18 00:34:44.638621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8116c0, cid 5, qid 0 00:29:20.938 [2024-11-18 00:34:44.638804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.938 [2024-11-18 00:34:44.638817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.938 [2024-11-18 00:34:44.638824] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.638830] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7b6650): datao=0, datal=1024, cccid=4 00:29:20.938 [2024-11-18 00:34:44.638838] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x811540) on tqpair(0x7b6650): expected_datao=0, 
payload_size=1024 00:29:20.938 [2024-11-18 00:34:44.638846] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.638856] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.638863] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.638872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.938 [2024-11-18 00:34:44.638882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.938 [2024-11-18 00:34:44.638888] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.638895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8116c0) on tqpair=0x7b6650 00:29:20.938 [2024-11-18 00:34:44.680434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.938 [2024-11-18 00:34:44.680454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.938 [2024-11-18 00:34:44.680462] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.680469] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811540) on tqpair=0x7b6650 00:29:20.938 [2024-11-18 00:34:44.680487] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.680497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7b6650) 00:29:20.938 [2024-11-18 00:34:44.680508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.938 [2024-11-18 00:34:44.680539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811540, cid 4, qid 0 00:29:20.938 [2024-11-18 00:34:44.680654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.938 [2024-11-18 00:34:44.680669] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.938 [2024-11-18 00:34:44.680676] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.680682] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7b6650): datao=0, datal=3072, cccid=4 00:29:20.938 [2024-11-18 00:34:44.680690] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x811540) on tqpair(0x7b6650): expected_datao=0, payload_size=3072 00:29:20.938 [2024-11-18 00:34:44.680697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.680718] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.680733] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.725325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.938 [2024-11-18 00:34:44.725345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.938 [2024-11-18 00:34:44.725353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.725360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811540) on tqpair=0x7b6650 00:29:20.938 [2024-11-18 00:34:44.725377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.725386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7b6650) 00:29:20.938 [2024-11-18 00:34:44.725397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.938 [2024-11-18 00:34:44.725428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811540, cid 4, qid 0 00:29:20.938 [2024-11-18 00:34:44.725578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.938 [2024-11-18 
00:34:44.725592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.938 [2024-11-18 00:34:44.725600] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.725606] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7b6650): datao=0, datal=8, cccid=4 00:29:20.938 [2024-11-18 00:34:44.725614] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x811540) on tqpair(0x7b6650): expected_datao=0, payload_size=8 00:29:20.938 [2024-11-18 00:34:44.725621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.725631] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.938 [2024-11-18 00:34:44.725639] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:21.200 [2024-11-18 00:34:44.767388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.200 [2024-11-18 00:34:44.767407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.200 [2024-11-18 00:34:44.767416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.200 [2024-11-18 00:34:44.767423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811540) on tqpair=0x7b6650 00:29:21.200 ===================================================== 00:29:21.200 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:21.200 ===================================================== 00:29:21.200 Controller Capabilities/Features 00:29:21.201 ================================ 00:29:21.201 Vendor ID: 0000 00:29:21.201 Subsystem Vendor ID: 0000 00:29:21.201 Serial Number: .................... 00:29:21.201 Model Number: ........................................ 
00:29:21.201 Firmware Version: 25.01 00:29:21.201 Recommended Arb Burst: 0 00:29:21.201 IEEE OUI Identifier: 00 00 00 00:29:21.201 Multi-path I/O 00:29:21.201 May have multiple subsystem ports: No 00:29:21.201 May have multiple controllers: No 00:29:21.201 Associated with SR-IOV VF: No 00:29:21.201 Max Data Transfer Size: 131072 00:29:21.201 Max Number of Namespaces: 0 00:29:21.201 Max Number of I/O Queues: 1024 00:29:21.201 NVMe Specification Version (VS): 1.3 00:29:21.201 NVMe Specification Version (Identify): 1.3 00:29:21.201 Maximum Queue Entries: 128 00:29:21.201 Contiguous Queues Required: Yes 00:29:21.201 Arbitration Mechanisms Supported 00:29:21.201 Weighted Round Robin: Not Supported 00:29:21.201 Vendor Specific: Not Supported 00:29:21.201 Reset Timeout: 15000 ms 00:29:21.201 Doorbell Stride: 4 bytes 00:29:21.201 NVM Subsystem Reset: Not Supported 00:29:21.201 Command Sets Supported 00:29:21.201 NVM Command Set: Supported 00:29:21.201 Boot Partition: Not Supported 00:29:21.201 Memory Page Size Minimum: 4096 bytes 00:29:21.201 Memory Page Size Maximum: 4096 bytes 00:29:21.201 Persistent Memory Region: Not Supported 00:29:21.201 Optional Asynchronous Events Supported 00:29:21.201 Namespace Attribute Notices: Not Supported 00:29:21.201 Firmware Activation Notices: Not Supported 00:29:21.201 ANA Change Notices: Not Supported 00:29:21.201 PLE Aggregate Log Change Notices: Not Supported 00:29:21.201 LBA Status Info Alert Notices: Not Supported 00:29:21.201 EGE Aggregate Log Change Notices: Not Supported 00:29:21.201 Normal NVM Subsystem Shutdown event: Not Supported 00:29:21.201 Zone Descriptor Change Notices: Not Supported 00:29:21.201 Discovery Log Change Notices: Supported 00:29:21.201 Controller Attributes 00:29:21.201 128-bit Host Identifier: Not Supported 00:29:21.201 Non-Operational Permissive Mode: Not Supported 00:29:21.201 NVM Sets: Not Supported 00:29:21.201 Read Recovery Levels: Not Supported 00:29:21.201 Endurance Groups: Not Supported 00:29:21.201 
Predictable Latency Mode: Not Supported 00:29:21.201 Traffic Based Keep ALive: Not Supported 00:29:21.201 Namespace Granularity: Not Supported 00:29:21.201 SQ Associations: Not Supported 00:29:21.201 UUID List: Not Supported 00:29:21.201 Multi-Domain Subsystem: Not Supported 00:29:21.201 Fixed Capacity Management: Not Supported 00:29:21.201 Variable Capacity Management: Not Supported 00:29:21.201 Delete Endurance Group: Not Supported 00:29:21.201 Delete NVM Set: Not Supported 00:29:21.201 Extended LBA Formats Supported: Not Supported 00:29:21.201 Flexible Data Placement Supported: Not Supported 00:29:21.201 00:29:21.201 Controller Memory Buffer Support 00:29:21.201 ================================ 00:29:21.201 Supported: No 00:29:21.201 00:29:21.201 Persistent Memory Region Support 00:29:21.201 ================================ 00:29:21.201 Supported: No 00:29:21.201 00:29:21.201 Admin Command Set Attributes 00:29:21.201 ============================ 00:29:21.201 Security Send/Receive: Not Supported 00:29:21.201 Format NVM: Not Supported 00:29:21.201 Firmware Activate/Download: Not Supported 00:29:21.201 Namespace Management: Not Supported 00:29:21.201 Device Self-Test: Not Supported 00:29:21.201 Directives: Not Supported 00:29:21.201 NVMe-MI: Not Supported 00:29:21.201 Virtualization Management: Not Supported 00:29:21.201 Doorbell Buffer Config: Not Supported 00:29:21.201 Get LBA Status Capability: Not Supported 00:29:21.201 Command & Feature Lockdown Capability: Not Supported 00:29:21.201 Abort Command Limit: 1 00:29:21.201 Async Event Request Limit: 4 00:29:21.201 Number of Firmware Slots: N/A 00:29:21.201 Firmware Slot 1 Read-Only: N/A 00:29:21.201 Firmware Activation Without Reset: N/A 00:29:21.201 Multiple Update Detection Support: N/A 00:29:21.201 Firmware Update Granularity: No Information Provided 00:29:21.201 Per-Namespace SMART Log: No 00:29:21.201 Asymmetric Namespace Access Log Page: Not Supported 00:29:21.201 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:29:21.201 Command Effects Log Page: Not Supported 00:29:21.201 Get Log Page Extended Data: Supported 00:29:21.201 Telemetry Log Pages: Not Supported 00:29:21.201 Persistent Event Log Pages: Not Supported 00:29:21.201 Supported Log Pages Log Page: May Support 00:29:21.201 Commands Supported & Effects Log Page: Not Supported 00:29:21.201 Feature Identifiers & Effects Log Page:May Support 00:29:21.201 NVMe-MI Commands & Effects Log Page: May Support 00:29:21.201 Data Area 4 for Telemetry Log: Not Supported 00:29:21.201 Error Log Page Entries Supported: 128 00:29:21.201 Keep Alive: Not Supported 00:29:21.201 00:29:21.201 NVM Command Set Attributes 00:29:21.201 ========================== 00:29:21.201 Submission Queue Entry Size 00:29:21.201 Max: 1 00:29:21.201 Min: 1 00:29:21.201 Completion Queue Entry Size 00:29:21.201 Max: 1 00:29:21.201 Min: 1 00:29:21.201 Number of Namespaces: 0 00:29:21.201 Compare Command: Not Supported 00:29:21.201 Write Uncorrectable Command: Not Supported 00:29:21.201 Dataset Management Command: Not Supported 00:29:21.201 Write Zeroes Command: Not Supported 00:29:21.201 Set Features Save Field: Not Supported 00:29:21.201 Reservations: Not Supported 00:29:21.201 Timestamp: Not Supported 00:29:21.201 Copy: Not Supported 00:29:21.201 Volatile Write Cache: Not Present 00:29:21.201 Atomic Write Unit (Normal): 1 00:29:21.201 Atomic Write Unit (PFail): 1 00:29:21.201 Atomic Compare & Write Unit: 1 00:29:21.201 Fused Compare & Write: Supported 00:29:21.201 Scatter-Gather List 00:29:21.201 SGL Command Set: Supported 00:29:21.201 SGL Keyed: Supported 00:29:21.201 SGL Bit Bucket Descriptor: Not Supported 00:29:21.201 SGL Metadata Pointer: Not Supported 00:29:21.201 Oversized SGL: Not Supported 00:29:21.201 SGL Metadata Address: Not Supported 00:29:21.201 SGL Offset: Supported 00:29:21.201 Transport SGL Data Block: Not Supported 00:29:21.201 Replay Protected Memory Block: Not Supported 00:29:21.201 00:29:21.201 
Firmware Slot Information 00:29:21.201 ========================= 00:29:21.201 Active slot: 0 00:29:21.201 00:29:21.201 00:29:21.201 Error Log 00:29:21.201 ========= 00:29:21.201 00:29:21.201 Active Namespaces 00:29:21.201 ================= 00:29:21.201 Discovery Log Page 00:29:21.201 ================== 00:29:21.201 Generation Counter: 2 00:29:21.201 Number of Records: 2 00:29:21.201 Record Format: 0 00:29:21.201 00:29:21.201 Discovery Log Entry 0 00:29:21.201 ---------------------- 00:29:21.201 Transport Type: 3 (TCP) 00:29:21.201 Address Family: 1 (IPv4) 00:29:21.201 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:21.201 Entry Flags: 00:29:21.201 Duplicate Returned Information: 1 00:29:21.202 Explicit Persistent Connection Support for Discovery: 1 00:29:21.202 Transport Requirements: 00:29:21.202 Secure Channel: Not Required 00:29:21.202 Port ID: 0 (0x0000) 00:29:21.202 Controller ID: 65535 (0xffff) 00:29:21.202 Admin Max SQ Size: 128 00:29:21.202 Transport Service Identifier: 4420 00:29:21.202 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:21.202 Transport Address: 10.0.0.2 00:29:21.202 Discovery Log Entry 1 00:29:21.202 ---------------------- 00:29:21.202 Transport Type: 3 (TCP) 00:29:21.202 Address Family: 1 (IPv4) 00:29:21.202 Subsystem Type: 2 (NVM Subsystem) 00:29:21.202 Entry Flags: 00:29:21.202 Duplicate Returned Information: 0 00:29:21.202 Explicit Persistent Connection Support for Discovery: 0 00:29:21.202 Transport Requirements: 00:29:21.202 Secure Channel: Not Required 00:29:21.202 Port ID: 0 (0x0000) 00:29:21.202 Controller ID: 65535 (0xffff) 00:29:21.202 Admin Max SQ Size: 128 00:29:21.202 Transport Service Identifier: 4420 00:29:21.202 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:21.202 Transport Address: 10.0.0.2 [2024-11-18 00:34:44.767547] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:21.202 [2024-11-18 
00:34:44.767570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x810f40) on tqpair=0x7b6650 00:29:21.202 [2024-11-18 00:34:44.767583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.202 [2024-11-18 00:34:44.767601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8110c0) on tqpair=0x7b6650 00:29:21.202 [2024-11-18 00:34:44.767609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.202 [2024-11-18 00:34:44.767617] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811240) on tqpair=0x7b6650 00:29:21.202 [2024-11-18 00:34:44.767625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.202 [2024-11-18 00:34:44.767633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8113c0) on tqpair=0x7b6650 00:29:21.202 [2024-11-18 00:34:44.767641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.202 [2024-11-18 00:34:44.767659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.202 [2024-11-18 00:34:44.767668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.202 [2024-11-18 00:34:44.767675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b6650) 00:29:21.202 [2024-11-18 00:34:44.767686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.202 [2024-11-18 00:34:44.767712] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8113c0, cid 3, qid 0 00:29:21.202 [2024-11-18 00:34:44.767794] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.202 [2024-11-18 
00:34:44.767809] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.202 [2024-11-18 00:34:44.767816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.202 [2024-11-18 00:34:44.767823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8113c0) on tqpair=0x7b6650 00:29:21.202 [2024-11-18 00:34:44.767835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.202 [2024-11-18 00:34:44.767843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.202 [2024-11-18 00:34:44.767850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b6650) 00:29:21.202 [2024-11-18 00:34:44.767861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.202 [2024-11-18 00:34:44.767888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8113c0, cid 3, qid 0 00:29:21.202 [2024-11-18 00:34:44.767985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.202 [2024-11-18 00:34:44.767999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.202 [2024-11-18 00:34:44.768006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.202 [2024-11-18 00:34:44.768012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8113c0) on tqpair=0x7b6650 00:29:21.202 [2024-11-18 00:34:44.768021] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:21.202 [2024-11-18 00:34:44.768029] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:21.202 [2024-11-18 00:34:44.768045] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.202 [2024-11-18 00:34:44.768054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.202 
[2024-11-18 00:34:44.768060] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b6650) 00:29:21.202 [2024-11-18 00:34:44.768071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.202 [2024-11-18 00:34:44.768092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8113c0, cid 3, qid 0 00:29:21.202 [2024-11-18 00:34:44.768178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.202 [2024-11-18 00:34:44.768190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.202 [2024-11-18 00:34:44.768196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.202 [2024-11-18 00:34:44.768203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8113c0) on tqpair=0x7b6650 00:29:21.202 [2024-11-18 00:34:44.768219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.202 [2024-11-18 00:34:44.768228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.202 [2024-11-18 00:34:44.768235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b6650) 00:29:21.202 [2024-11-18 00:34:44.768245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.202 [2024-11-18 00:34:44.768266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8113c0, cid 3, qid 0 00:29:21.202 [2024-11-18 00:34:44.772322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.202 [2024-11-18 00:34:44.772339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.202 [2024-11-18 00:34:44.772347] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.202 [2024-11-18 00:34:44.772353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8113c0) on tqpair=0x7b6650 
00:29:21.202 [2024-11-18 00:34:44.772372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.202 [2024-11-18 00:34:44.772396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.202 [2024-11-18 00:34:44.772403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b6650) 00:29:21.202 [2024-11-18 00:34:44.772414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.202 [2024-11-18 00:34:44.772442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8113c0, cid 3, qid 0 00:29:21.202 [2024-11-18 00:34:44.772543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.202 [2024-11-18 00:34:44.772555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.202 [2024-11-18 00:34:44.772562] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.202 [2024-11-18 00:34:44.772569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8113c0) on tqpair=0x7b6650 00:29:21.202 [2024-11-18 00:34:44.772582] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:29:21.202 00:29:21.202 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:21.202 [2024-11-18 00:34:44.804712] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:29:21.202 [2024-11-18 00:34:44.804753] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid340214 ] 00:29:21.202 [2024-11-18 00:34:44.852008] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:21.202 [2024-11-18 00:34:44.852058] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:21.202 [2024-11-18 00:34:44.852068] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:21.202 [2024-11-18 00:34:44.852081] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:21.202 [2024-11-18 00:34:44.852093] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:21.202 [2024-11-18 00:34:44.855593] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:21.203 [2024-11-18 00:34:44.855645] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19b2650 0 00:29:21.203 [2024-11-18 00:34:44.870329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:21.203 [2024-11-18 00:34:44.870350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:21.203 [2024-11-18 00:34:44.870359] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:21.203 [2024-11-18 00:34:44.870365] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:21.203 [2024-11-18 00:34:44.870398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.203 [2024-11-18 00:34:44.870412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.203 [2024-11-18 00:34:44.870419] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b2650) 00:29:21.203 [2024-11-18 00:34:44.870433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:21.203 [2024-11-18 00:34:44.870461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0cf40, cid 0, qid 0 00:29:21.203 [2024-11-18 00:34:44.877322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.203 [2024-11-18 00:34:44.877342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.203 [2024-11-18 00:34:44.877349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.203 [2024-11-18 00:34:44.877356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0cf40) on tqpair=0x19b2650 00:29:21.203 [2024-11-18 00:34:44.877377] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:21.203 [2024-11-18 00:34:44.877395] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:21.203 [2024-11-18 00:34:44.877405] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:21.203 [2024-11-18 00:34:44.877423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.203 [2024-11-18 00:34:44.877435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.203 [2024-11-18 00:34:44.877442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b2650) 00:29:21.203 [2024-11-18 00:34:44.877453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.203 [2024-11-18 00:34:44.877478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0cf40, cid 0, qid 0 00:29:21.203 [2024-11-18 00:34:44.877605] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.203 [2024-11-18 00:34:44.877621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.203 [2024-11-18 00:34:44.877628] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.203 [2024-11-18 00:34:44.877635] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0cf40) on tqpair=0x19b2650 00:29:21.203 [2024-11-18 00:34:44.877644] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:21.203 [2024-11-18 00:34:44.877660] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:21.203 [2024-11-18 00:34:44.877674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.203 [2024-11-18 00:34:44.877682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.203 [2024-11-18 00:34:44.877688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b2650) 00:29:21.203 [2024-11-18 00:34:44.877702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.203 [2024-11-18 00:34:44.877726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0cf40, cid 0, qid 0 00:29:21.203 [2024-11-18 00:34:44.877804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.203 [2024-11-18 00:34:44.877819] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.203 [2024-11-18 00:34:44.877826] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.203 [2024-11-18 00:34:44.877833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0cf40) on tqpair=0x19b2650 00:29:21.203 [2024-11-18 00:34:44.877842] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:29:21.203 [2024-11-18 00:34:44.877859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:21.203 [2024-11-18 00:34:44.877872] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.203 [2024-11-18 00:34:44.877880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.203 [2024-11-18 00:34:44.877886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b2650) 00:29:21.203 [2024-11-18 00:34:44.877900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.203 [2024-11-18 00:34:44.877924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0cf40, cid 0, qid 0 00:29:21.203 [2024-11-18 00:34:44.878002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.203 [2024-11-18 00:34:44.878017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.203 [2024-11-18 00:34:44.878024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.203 [2024-11-18 00:34:44.878031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0cf40) on tqpair=0x19b2650 00:29:21.203 [2024-11-18 00:34:44.878039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:21.203 [2024-11-18 00:34:44.878063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.203 [2024-11-18 00:34:44.878075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.203 [2024-11-18 00:34:44.878081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b2650) 00:29:21.203 [2024-11-18 00:34:44.878092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.203 [2024-11-18 00:34:44.878115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0cf40, cid 0, qid 0 00:29:21.203 [2024-11-18 00:34:44.878196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.203 [2024-11-18 00:34:44.878211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.203 [2024-11-18 00:34:44.878218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.203 [2024-11-18 00:34:44.878225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0cf40) on tqpair=0x19b2650 00:29:21.203 [2024-11-18 00:34:44.878232] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:21.203 [2024-11-18 00:34:44.878240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:21.203 [2024-11-18 00:34:44.878255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:21.203 [2024-11-18 00:34:44.878369] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:21.203 [2024-11-18 00:34:44.878380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:21.203 [2024-11-18 00:34:44.878392] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.203 [2024-11-18 00:34:44.878400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.203 [2024-11-18 00:34:44.878406] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b2650) 00:29:21.203 [2024-11-18 00:34:44.878417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.203 [2024-11-18 00:34:44.878440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0cf40, cid 0, qid 0 00:29:21.203 [2024-11-18 00:34:44.878627] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.203 [2024-11-18 00:34:44.878642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.204 [2024-11-18 00:34:44.878649] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.878655] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0cf40) on tqpair=0x19b2650 00:29:21.204 [2024-11-18 00:34:44.878664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:21.204 [2024-11-18 00:34:44.878683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.878693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.878700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b2650) 00:29:21.204 [2024-11-18 00:34:44.878710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.204 [2024-11-18 00:34:44.878733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0cf40, cid 0, qid 0 00:29:21.204 [2024-11-18 00:34:44.878814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.204 [2024-11-18 00:34:44.878829] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.204 [2024-11-18 00:34:44.878836] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.878843] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0cf40) on tqpair=0x19b2650 00:29:21.204 [2024-11-18 00:34:44.878850] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:21.204 [2024-11-18 00:34:44.878863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:21.204 [2024-11-18 00:34:44.878878] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:21.204 [2024-11-18 00:34:44.878899] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:21.204 [2024-11-18 00:34:44.878914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.878921] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b2650) 00:29:21.204 [2024-11-18 00:34:44.878932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.204 [2024-11-18 00:34:44.878955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0cf40, cid 0, qid 0 00:29:21.204 [2024-11-18 00:34:44.879083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:21.204 [2024-11-18 00:34:44.879099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:21.204 [2024-11-18 00:34:44.879106] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.879112] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b2650): datao=0, datal=4096, cccid=0 00:29:21.204 [2024-11-18 00:34:44.879120] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a0cf40) on tqpair(0x19b2650): expected_datao=0, payload_size=4096 00:29:21.204 [2024-11-18 00:34:44.879133] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.879147] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.879155] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.879167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.204 [2024-11-18 00:34:44.879177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.204 [2024-11-18 00:34:44.879184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.879190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0cf40) on tqpair=0x19b2650 00:29:21.204 [2024-11-18 00:34:44.879201] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:21.204 [2024-11-18 00:34:44.879209] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:21.204 [2024-11-18 00:34:44.879217] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:21.204 [2024-11-18 00:34:44.879228] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:21.204 [2024-11-18 00:34:44.879237] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:21.204 [2024-11-18 00:34:44.879245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:21.204 [2024-11-18 00:34:44.879264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:21.204 [2024-11-18 00:34:44.879279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.879287] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.879293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b2650) 00:29:21.204 [2024-11-18 00:34:44.879304] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:21.204 [2024-11-18 00:34:44.879336] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0cf40, cid 0, qid 0 00:29:21.204 [2024-11-18 00:34:44.879429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.204 [2024-11-18 00:34:44.879451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.204 [2024-11-18 00:34:44.879459] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.879466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0cf40) on tqpair=0x19b2650 00:29:21.204 [2024-11-18 00:34:44.879476] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.879484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.879490] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b2650) 00:29:21.204 [2024-11-18 00:34:44.879500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.204 [2024-11-18 00:34:44.879511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.879518] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.879524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19b2650) 00:29:21.204 [2024-11-18 00:34:44.879533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:29:21.204 [2024-11-18 00:34:44.879543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.879550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.879556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19b2650) 00:29:21.204 [2024-11-18 00:34:44.879564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.204 [2024-11-18 00:34:44.879574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.879581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.204 [2024-11-18 00:34:44.879587] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b2650) 00:29:21.205 [2024-11-18 00:34:44.879596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.205 [2024-11-18 00:34:44.879604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:21.205 [2024-11-18 00:34:44.879634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:21.205 [2024-11-18 00:34:44.879649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.205 [2024-11-18 00:34:44.879656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b2650) 00:29:21.205 [2024-11-18 00:34:44.879666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.205 [2024-11-18 00:34:44.879688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1a0cf40, cid 0, qid 0 00:29:21.205 [2024-11-18 00:34:44.879699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d0c0, cid 1, qid 0 00:29:21.205 [2024-11-18 00:34:44.879724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d240, cid 2, qid 0 00:29:21.205 [2024-11-18 00:34:44.879732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d3c0, cid 3, qid 0 00:29:21.205 [2024-11-18 00:34:44.879740] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d540, cid 4, qid 0 00:29:21.205 [2024-11-18 00:34:44.879880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.205 [2024-11-18 00:34:44.879895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.205 [2024-11-18 00:34:44.879902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.205 [2024-11-18 00:34:44.879909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d540) on tqpair=0x19b2650 00:29:21.205 [2024-11-18 00:34:44.879921] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:29:21.205 [2024-11-18 00:34:44.879934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:21.205 [2024-11-18 00:34:44.879951] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:21.205 [2024-11-18 00:34:44.879966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:21.205 [2024-11-18 00:34:44.879977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.205 [2024-11-18 00:34:44.879984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.205 [2024-11-18 
00:34:44.879991] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b2650) 00:29:21.205 [2024-11-18 00:34:44.880001] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:21.205 [2024-11-18 00:34:44.880038] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d540, cid 4, qid 0 00:29:21.205 [2024-11-18 00:34:44.880198] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.205 [2024-11-18 00:34:44.880213] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.205 [2024-11-18 00:34:44.880220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.205 [2024-11-18 00:34:44.880227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d540) on tqpair=0x19b2650 00:29:21.205 [2024-11-18 00:34:44.880297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:21.205 [2024-11-18 00:34:44.880328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:21.205 [2024-11-18 00:34:44.880347] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.205 [2024-11-18 00:34:44.880354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b2650) 00:29:21.205 [2024-11-18 00:34:44.880365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.205 [2024-11-18 00:34:44.880387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d540, cid 4, qid 0 00:29:21.205 [2024-11-18 00:34:44.880513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:21.205 [2024-11-18 00:34:44.880528] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:21.205 [2024-11-18 00:34:44.880535] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:21.205 [2024-11-18 00:34:44.880542] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b2650): datao=0, datal=4096, cccid=4 00:29:21.205 [2024-11-18 00:34:44.880549] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a0d540) on tqpair(0x19b2650): expected_datao=0, payload_size=4096 00:29:21.205 [2024-11-18 00:34:44.880557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.205 [2024-11-18 00:34:44.880582] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:21.205 [2024-11-18 00:34:44.880595] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:21.205 [2024-11-18 00:34:44.880623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.205 [2024-11-18 00:34:44.880638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.205 [2024-11-18 00:34:44.880645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.205 [2024-11-18 00:34:44.880652] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d540) on tqpair=0x19b2650 00:29:21.205 [2024-11-18 00:34:44.880668] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:21.205 [2024-11-18 00:34:44.880685] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:21.205 [2024-11-18 00:34:44.880705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:21.205 [2024-11-18 00:34:44.880725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.205 [2024-11-18 00:34:44.880733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x19b2650) 00:29:21.205 [2024-11-18 00:34:44.880744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.205 [2024-11-18 00:34:44.880766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d540, cid 4, qid 0 00:29:21.205 [2024-11-18 00:34:44.880873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:21.205 [2024-11-18 00:34:44.880893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:21.205 [2024-11-18 00:34:44.880902] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:21.205 [2024-11-18 00:34:44.880908] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b2650): datao=0, datal=4096, cccid=4 00:29:21.205 [2024-11-18 00:34:44.880915] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a0d540) on tqpair(0x19b2650): expected_datao=0, payload_size=4096 00:29:21.205 [2024-11-18 00:34:44.880926] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.205 [2024-11-18 00:34:44.880949] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:21.205 [2024-11-18 00:34:44.880958] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:21.205 [2024-11-18 00:34:44.880970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.205 [2024-11-18 00:34:44.880980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.205 [2024-11-18 00:34:44.880986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.205 [2024-11-18 00:34:44.880993] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d540) on tqpair=0x19b2650 00:29:21.206 [2024-11-18 00:34:44.881015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:21.206 
[2024-11-18 00:34:44.881036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:21.206 [2024-11-18 00:34:44.881052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.881060] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b2650) 00:29:21.206 [2024-11-18 00:34:44.881071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.206 [2024-11-18 00:34:44.881093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d540, cid 4, qid 0 00:29:21.206 [2024-11-18 00:34:44.881185] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:21.206 [2024-11-18 00:34:44.881204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:21.206 [2024-11-18 00:34:44.881212] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.881218] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b2650): datao=0, datal=4096, cccid=4 00:29:21.206 [2024-11-18 00:34:44.881227] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a0d540) on tqpair(0x19b2650): expected_datao=0, payload_size=4096 00:29:21.206 [2024-11-18 00:34:44.881239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.881258] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.881267] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.881279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.206 [2024-11-18 00:34:44.881289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.206 [2024-11-18 00:34:44.881295] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.881302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d540) on tqpair=0x19b2650 00:29:21.206 [2024-11-18 00:34:44.885327] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:21.206 [2024-11-18 00:34:44.885351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:21.206 [2024-11-18 00:34:44.885370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:21.206 [2024-11-18 00:34:44.885382] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:21.206 [2024-11-18 00:34:44.885391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:21.206 [2024-11-18 00:34:44.885400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:21.206 [2024-11-18 00:34:44.885408] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:21.206 [2024-11-18 00:34:44.885415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:21.206 [2024-11-18 00:34:44.885424] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:21.206 [2024-11-18 00:34:44.885442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.885451] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b2650) 00:29:21.206 [2024-11-18 00:34:44.885462] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.206 [2024-11-18 00:34:44.885473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.885480] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.885486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19b2650) 00:29:21.206 [2024-11-18 00:34:44.885494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.206 [2024-11-18 00:34:44.885522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d540, cid 4, qid 0 00:29:21.206 [2024-11-18 00:34:44.885549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d6c0, cid 5, qid 0 00:29:21.206 [2024-11-18 00:34:44.885675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.206 [2024-11-18 00:34:44.885695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.206 [2024-11-18 00:34:44.885702] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.885709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d540) on tqpair=0x19b2650 00:29:21.206 [2024-11-18 00:34:44.885719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.206 [2024-11-18 00:34:44.885729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.206 [2024-11-18 00:34:44.885735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.885742] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d6c0) on tqpair=0x19b2650 00:29:21.206 [2024-11-18 
00:34:44.885760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.885771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19b2650) 00:29:21.206 [2024-11-18 00:34:44.885781] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.206 [2024-11-18 00:34:44.885804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d6c0, cid 5, qid 0 00:29:21.206 [2024-11-18 00:34:44.885887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.206 [2024-11-18 00:34:44.885902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.206 [2024-11-18 00:34:44.885913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.885920] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d6c0) on tqpair=0x19b2650 00:29:21.206 [2024-11-18 00:34:44.885939] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.885950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19b2650) 00:29:21.206 [2024-11-18 00:34:44.885960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.206 [2024-11-18 00:34:44.885983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d6c0, cid 5, qid 0 00:29:21.206 [2024-11-18 00:34:44.886062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.206 [2024-11-18 00:34:44.886077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.206 [2024-11-18 00:34:44.886084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.886090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1a0d6c0) on tqpair=0x19b2650 00:29:21.206 [2024-11-18 00:34:44.886108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.886119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19b2650) 00:29:21.206 [2024-11-18 00:34:44.886129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.206 [2024-11-18 00:34:44.886151] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d6c0, cid 5, qid 0 00:29:21.206 [2024-11-18 00:34:44.886231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.206 [2024-11-18 00:34:44.886246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.206 [2024-11-18 00:34:44.886253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.886260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d6c0) on tqpair=0x19b2650 00:29:21.206 [2024-11-18 00:34:44.886285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.886297] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19b2650) 00:29:21.206 [2024-11-18 00:34:44.886308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.206 [2024-11-18 00:34:44.886336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.886345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b2650) 00:29:21.206 [2024-11-18 00:34:44.886355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:21.206 [2024-11-18 00:34:44.886367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.886374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x19b2650) 00:29:21.206 [2024-11-18 00:34:44.886383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.206 [2024-11-18 00:34:44.886394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.206 [2024-11-18 00:34:44.886402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19b2650) 00:29:21.206 [2024-11-18 00:34:44.886411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.206 [2024-11-18 00:34:44.886434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d6c0, cid 5, qid 0 00:29:21.207 [2024-11-18 00:34:44.886446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d540, cid 4, qid 0 00:29:21.207 [2024-11-18 00:34:44.886454] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d840, cid 6, qid 0 00:29:21.207 [2024-11-18 00:34:44.886465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d9c0, cid 7, qid 0 00:29:21.207 [2024-11-18 00:34:44.886652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:21.207 [2024-11-18 00:34:44.886667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:21.207 [2024-11-18 00:34:44.886678] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.886690] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b2650): datao=0, datal=8192, cccid=5 00:29:21.207 [2024-11-18 00:34:44.886704] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a0d6c0) on tqpair(0x19b2650): expected_datao=0, payload_size=8192 00:29:21.207 [2024-11-18 00:34:44.886717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.886739] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.886748] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.886761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:21.207 [2024-11-18 00:34:44.886778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:21.207 [2024-11-18 00:34:44.886787] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.886793] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b2650): datao=0, datal=512, cccid=4 00:29:21.207 [2024-11-18 00:34:44.886800] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a0d540) on tqpair(0x19b2650): expected_datao=0, payload_size=512 00:29:21.207 [2024-11-18 00:34:44.886808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.886817] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.886824] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.886832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:21.207 [2024-11-18 00:34:44.886841] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:21.207 [2024-11-18 00:34:44.886847] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.886853] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b2650): datao=0, datal=512, cccid=6 00:29:21.207 [2024-11-18 00:34:44.886861] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1a0d840) on tqpair(0x19b2650): expected_datao=0, payload_size=512 00:29:21.207 [2024-11-18 00:34:44.886868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.886877] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.886884] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.886892] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:21.207 [2024-11-18 00:34:44.886901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:21.207 [2024-11-18 00:34:44.886907] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.886913] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b2650): datao=0, datal=4096, cccid=7 00:29:21.207 [2024-11-18 00:34:44.886920] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a0d9c0) on tqpair(0x19b2650): expected_datao=0, payload_size=4096 00:29:21.207 [2024-11-18 00:34:44.886928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.886937] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.886945] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.886956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.207 [2024-11-18 00:34:44.886966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.207 [2024-11-18 00:34:44.886972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.886979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d6c0) on tqpair=0x19b2650 00:29:21.207 [2024-11-18 00:34:44.886997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.207 [2024-11-18 00:34:44.887014] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.207 [2024-11-18 00:34:44.887022] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.887043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d540) on tqpair=0x19b2650 00:29:21.207 [2024-11-18 00:34:44.887059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.207 [2024-11-18 00:34:44.887069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.207 [2024-11-18 00:34:44.887076] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.887082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d840) on tqpair=0x19b2650 00:29:21.207 [2024-11-18 00:34:44.887105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.207 [2024-11-18 00:34:44.887115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.207 [2024-11-18 00:34:44.887122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.207 [2024-11-18 00:34:44.887128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d9c0) on tqpair=0x19b2650 00:29:21.207 ===================================================== 00:29:21.207 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:21.207 ===================================================== 00:29:21.207 Controller Capabilities/Features 00:29:21.207 ================================ 00:29:21.207 Vendor ID: 8086 00:29:21.207 Subsystem Vendor ID: 8086 00:29:21.207 Serial Number: SPDK00000000000001 00:29:21.207 Model Number: SPDK bdev Controller 00:29:21.207 Firmware Version: 25.01 00:29:21.207 Recommended Arb Burst: 6 00:29:21.207 IEEE OUI Identifier: e4 d2 5c 00:29:21.207 Multi-path I/O 00:29:21.207 May have multiple subsystem ports: Yes 00:29:21.207 May have multiple controllers: Yes 00:29:21.207 Associated with SR-IOV VF: No 
00:29:21.207 Max Data Transfer Size: 131072 00:29:21.207 Max Number of Namespaces: 32 00:29:21.207 Max Number of I/O Queues: 127 00:29:21.207 NVMe Specification Version (VS): 1.3 00:29:21.207 NVMe Specification Version (Identify): 1.3 00:29:21.207 Maximum Queue Entries: 128 00:29:21.207 Contiguous Queues Required: Yes 00:29:21.207 Arbitration Mechanisms Supported 00:29:21.207 Weighted Round Robin: Not Supported 00:29:21.207 Vendor Specific: Not Supported 00:29:21.207 Reset Timeout: 15000 ms 00:29:21.207 Doorbell Stride: 4 bytes 00:29:21.207 NVM Subsystem Reset: Not Supported 00:29:21.207 Command Sets Supported 00:29:21.207 NVM Command Set: Supported 00:29:21.207 Boot Partition: Not Supported 00:29:21.207 Memory Page Size Minimum: 4096 bytes 00:29:21.207 Memory Page Size Maximum: 4096 bytes 00:29:21.207 Persistent Memory Region: Not Supported 00:29:21.207 Optional Asynchronous Events Supported 00:29:21.207 Namespace Attribute Notices: Supported 00:29:21.207 Firmware Activation Notices: Not Supported 00:29:21.207 ANA Change Notices: Not Supported 00:29:21.207 PLE Aggregate Log Change Notices: Not Supported 00:29:21.207 LBA Status Info Alert Notices: Not Supported 00:29:21.207 EGE Aggregate Log Change Notices: Not Supported 00:29:21.207 Normal NVM Subsystem Shutdown event: Not Supported 00:29:21.207 Zone Descriptor Change Notices: Not Supported 00:29:21.207 Discovery Log Change Notices: Not Supported 00:29:21.207 Controller Attributes 00:29:21.207 128-bit Host Identifier: Supported 00:29:21.207 Non-Operational Permissive Mode: Not Supported 00:29:21.207 NVM Sets: Not Supported 00:29:21.207 Read Recovery Levels: Not Supported 00:29:21.207 Endurance Groups: Not Supported 00:29:21.207 Predictable Latency Mode: Not Supported 00:29:21.208 Traffic Based Keep ALive: Not Supported 00:29:21.208 Namespace Granularity: Not Supported 00:29:21.208 SQ Associations: Not Supported 00:29:21.208 UUID List: Not Supported 00:29:21.208 Multi-Domain Subsystem: Not Supported 00:29:21.208 
Fixed Capacity Management: Not Supported 00:29:21.208 Variable Capacity Management: Not Supported 00:29:21.208 Delete Endurance Group: Not Supported 00:29:21.208 Delete NVM Set: Not Supported 00:29:21.208 Extended LBA Formats Supported: Not Supported 00:29:21.208 Flexible Data Placement Supported: Not Supported 00:29:21.208 00:29:21.208 Controller Memory Buffer Support 00:29:21.208 ================================ 00:29:21.208 Supported: No 00:29:21.208 00:29:21.208 Persistent Memory Region Support 00:29:21.208 ================================ 00:29:21.208 Supported: No 00:29:21.208 00:29:21.208 Admin Command Set Attributes 00:29:21.208 ============================ 00:29:21.208 Security Send/Receive: Not Supported 00:29:21.208 Format NVM: Not Supported 00:29:21.208 Firmware Activate/Download: Not Supported 00:29:21.208 Namespace Management: Not Supported 00:29:21.208 Device Self-Test: Not Supported 00:29:21.208 Directives: Not Supported 00:29:21.208 NVMe-MI: Not Supported 00:29:21.208 Virtualization Management: Not Supported 00:29:21.208 Doorbell Buffer Config: Not Supported 00:29:21.208 Get LBA Status Capability: Not Supported 00:29:21.208 Command & Feature Lockdown Capability: Not Supported 00:29:21.208 Abort Command Limit: 4 00:29:21.208 Async Event Request Limit: 4 00:29:21.208 Number of Firmware Slots: N/A 00:29:21.208 Firmware Slot 1 Read-Only: N/A 00:29:21.208 Firmware Activation Without Reset: N/A 00:29:21.208 Multiple Update Detection Support: N/A 00:29:21.208 Firmware Update Granularity: No Information Provided 00:29:21.208 Per-Namespace SMART Log: No 00:29:21.208 Asymmetric Namespace Access Log Page: Not Supported 00:29:21.208 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:21.208 Command Effects Log Page: Supported 00:29:21.208 Get Log Page Extended Data: Supported 00:29:21.208 Telemetry Log Pages: Not Supported 00:29:21.208 Persistent Event Log Pages: Not Supported 00:29:21.208 Supported Log Pages Log Page: May Support 00:29:21.208 Commands Supported & 
Effects Log Page: Not Supported 00:29:21.208 Feature Identifiers & Effects Log Page:May Support 00:29:21.208 NVMe-MI Commands & Effects Log Page: May Support 00:29:21.208 Data Area 4 for Telemetry Log: Not Supported 00:29:21.208 Error Log Page Entries Supported: 128 00:29:21.208 Keep Alive: Supported 00:29:21.208 Keep Alive Granularity: 10000 ms 00:29:21.208 00:29:21.208 NVM Command Set Attributes 00:29:21.208 ========================== 00:29:21.208 Submission Queue Entry Size 00:29:21.208 Max: 64 00:29:21.208 Min: 64 00:29:21.208 Completion Queue Entry Size 00:29:21.208 Max: 16 00:29:21.208 Min: 16 00:29:21.208 Number of Namespaces: 32 00:29:21.208 Compare Command: Supported 00:29:21.208 Write Uncorrectable Command: Not Supported 00:29:21.208 Dataset Management Command: Supported 00:29:21.208 Write Zeroes Command: Supported 00:29:21.208 Set Features Save Field: Not Supported 00:29:21.208 Reservations: Supported 00:29:21.208 Timestamp: Not Supported 00:29:21.208 Copy: Supported 00:29:21.208 Volatile Write Cache: Present 00:29:21.208 Atomic Write Unit (Normal): 1 00:29:21.208 Atomic Write Unit (PFail): 1 00:29:21.208 Atomic Compare & Write Unit: 1 00:29:21.208 Fused Compare & Write: Supported 00:29:21.208 Scatter-Gather List 00:29:21.208 SGL Command Set: Supported 00:29:21.208 SGL Keyed: Supported 00:29:21.208 SGL Bit Bucket Descriptor: Not Supported 00:29:21.208 SGL Metadata Pointer: Not Supported 00:29:21.208 Oversized SGL: Not Supported 00:29:21.208 SGL Metadata Address: Not Supported 00:29:21.208 SGL Offset: Supported 00:29:21.208 Transport SGL Data Block: Not Supported 00:29:21.208 Replay Protected Memory Block: Not Supported 00:29:21.208 00:29:21.208 Firmware Slot Information 00:29:21.208 ========================= 00:29:21.208 Active slot: 1 00:29:21.208 Slot 1 Firmware Revision: 25.01 00:29:21.208 00:29:21.208 00:29:21.208 Commands Supported and Effects 00:29:21.208 ============================== 00:29:21.208 Admin Commands 00:29:21.208 -------------- 
00:29:21.208 Get Log Page (02h): Supported 00:29:21.208 Identify (06h): Supported 00:29:21.208 Abort (08h): Supported 00:29:21.208 Set Features (09h): Supported 00:29:21.208 Get Features (0Ah): Supported 00:29:21.208 Asynchronous Event Request (0Ch): Supported 00:29:21.208 Keep Alive (18h): Supported 00:29:21.208 I/O Commands 00:29:21.208 ------------ 00:29:21.208 Flush (00h): Supported LBA-Change 00:29:21.208 Write (01h): Supported LBA-Change 00:29:21.208 Read (02h): Supported 00:29:21.208 Compare (05h): Supported 00:29:21.208 Write Zeroes (08h): Supported LBA-Change 00:29:21.208 Dataset Management (09h): Supported LBA-Change 00:29:21.208 Copy (19h): Supported LBA-Change 00:29:21.208 00:29:21.208 Error Log 00:29:21.208 ========= 00:29:21.208 00:29:21.208 Arbitration 00:29:21.208 =========== 00:29:21.208 Arbitration Burst: 1 00:29:21.208 00:29:21.208 Power Management 00:29:21.208 ================ 00:29:21.208 Number of Power States: 1 00:29:21.208 Current Power State: Power State #0 00:29:21.208 Power State #0: 00:29:21.208 Max Power: 0.00 W 00:29:21.208 Non-Operational State: Operational 00:29:21.208 Entry Latency: Not Reported 00:29:21.208 Exit Latency: Not Reported 00:29:21.208 Relative Read Throughput: 0 00:29:21.208 Relative Read Latency: 0 00:29:21.208 Relative Write Throughput: 0 00:29:21.208 Relative Write Latency: 0 00:29:21.208 Idle Power: Not Reported 00:29:21.208 Active Power: Not Reported 00:29:21.208 Non-Operational Permissive Mode: Not Supported 00:29:21.208 00:29:21.208 Health Information 00:29:21.208 ================== 00:29:21.208 Critical Warnings: 00:29:21.208 Available Spare Space: OK 00:29:21.208 Temperature: OK 00:29:21.208 Device Reliability: OK 00:29:21.208 Read Only: No 00:29:21.208 Volatile Memory Backup: OK 00:29:21.209 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:21.209 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:21.209 Available Spare: 0% 00:29:21.209 Available Spare Threshold: 0% 00:29:21.209 Life Percentage 
Used:[2024-11-18 00:34:44.887237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.887249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19b2650) 00:29:21.209 [2024-11-18 00:34:44.887260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.209 [2024-11-18 00:34:44.887282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d9c0, cid 7, qid 0 00:29:21.209 [2024-11-18 00:34:44.887416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.209 [2024-11-18 00:34:44.887432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.209 [2024-11-18 00:34:44.887439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.887446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d9c0) on tqpair=0x19b2650 00:29:21.209 [2024-11-18 00:34:44.887493] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:21.209 [2024-11-18 00:34:44.887516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0cf40) on tqpair=0x19b2650 00:29:21.209 [2024-11-18 00:34:44.887528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.209 [2024-11-18 00:34:44.887537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d0c0) on tqpair=0x19b2650 00:29:21.209 [2024-11-18 00:34:44.887544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.209 [2024-11-18 00:34:44.887552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d240) on tqpair=0x19b2650 00:29:21.209 [2024-11-18 00:34:44.887563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.209 [2024-11-18 00:34:44.887572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d3c0) on tqpair=0x19b2650 00:29:21.209 [2024-11-18 00:34:44.887579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.209 [2024-11-18 00:34:44.887592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.887600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.887606] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b2650) 00:29:21.209 [2024-11-18 00:34:44.887632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.209 [2024-11-18 00:34:44.887654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d3c0, cid 3, qid 0 00:29:21.209 [2024-11-18 00:34:44.887791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.209 [2024-11-18 00:34:44.887806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.209 [2024-11-18 00:34:44.887817] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.887824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d3c0) on tqpair=0x19b2650 00:29:21.209 [2024-11-18 00:34:44.887835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.887843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.887850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b2650) 00:29:21.209 [2024-11-18 00:34:44.887860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.209 [2024-11-18 00:34:44.887889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d3c0, cid 3, qid 0 00:29:21.209 [2024-11-18 00:34:44.887977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.209 [2024-11-18 00:34:44.887992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.209 [2024-11-18 00:34:44.887999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.888006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d3c0) on tqpair=0x19b2650 00:29:21.209 [2024-11-18 00:34:44.888013] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:21.209 [2024-11-18 00:34:44.888025] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:21.209 [2024-11-18 00:34:44.888042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.888051] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.888057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b2650) 00:29:21.209 [2024-11-18 00:34:44.888068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.209 [2024-11-18 00:34:44.888093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d3c0, cid 3, qid 0 00:29:21.209 [2024-11-18 00:34:44.888173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.209 [2024-11-18 00:34:44.888187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.209 [2024-11-18 00:34:44.888194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.888201] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d3c0) on tqpair=0x19b2650 00:29:21.209 [2024-11-18 00:34:44.888219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.888230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.888237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b2650) 00:29:21.209 [2024-11-18 00:34:44.888247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.209 [2024-11-18 00:34:44.888269] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d3c0, cid 3, qid 0 00:29:21.209 [2024-11-18 00:34:44.888352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.209 [2024-11-18 00:34:44.888368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.209 [2024-11-18 00:34:44.888375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.888382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d3c0) on tqpair=0x19b2650 00:29:21.209 [2024-11-18 00:34:44.888400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.888411] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.888418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b2650) 00:29:21.209 [2024-11-18 00:34:44.888429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.209 [2024-11-18 00:34:44.888451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d3c0, cid 3, qid 0 00:29:21.209 [2024-11-18 00:34:44.888528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.209 [2024-11-18 
00:34:44.888543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.209 [2024-11-18 00:34:44.888550] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.888557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d3c0) on tqpair=0x19b2650 00:29:21.209 [2024-11-18 00:34:44.888574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.888585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.209 [2024-11-18 00:34:44.888592] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b2650) 00:29:21.210 [2024-11-18 00:34:44.888602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.210 [2024-11-18 00:34:44.888626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d3c0, cid 3, qid 0 00:29:21.210 [2024-11-18 00:34:44.888702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.210 [2024-11-18 00:34:44.888717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.210 [2024-11-18 00:34:44.888724] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.210 [2024-11-18 00:34:44.888731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d3c0) on tqpair=0x19b2650 00:29:21.210 [2024-11-18 00:34:44.888748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.210 [2024-11-18 00:34:44.888758] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.210 [2024-11-18 00:34:44.888765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b2650) 00:29:21.210 [2024-11-18 00:34:44.888775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.210 [2024-11-18 
00:34:44.888797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d3c0, cid 3, qid 0 00:29:21.210 [2024-11-18 00:34:44.888879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.210 [2024-11-18 00:34:44.888894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.210 [2024-11-18 00:34:44.888901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.210 [2024-11-18 00:34:44.888907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d3c0) on tqpair=0x19b2650 00:29:21.210 [2024-11-18 00:34:44.888925] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.210 [2024-11-18 00:34:44.888936] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.210 [2024-11-18 00:34:44.888943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b2650) 00:29:21.210 [2024-11-18 00:34:44.888953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.210 [2024-11-18 00:34:44.888976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d3c0, cid 3, qid 0 00:29:21.210 [2024-11-18 00:34:44.889058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.210 [2024-11-18 00:34:44.889073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.210 [2024-11-18 00:34:44.889080] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.210 [2024-11-18 00:34:44.889087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d3c0) on tqpair=0x19b2650 00:29:21.210 [2024-11-18 00:34:44.889104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.210 [2024-11-18 00:34:44.889116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.210 [2024-11-18 00:34:44.889123] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x19b2650) 00:29:21.210 [2024-11-18 00:34:44.889133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.210 [2024-11-18 00:34:44.889155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d3c0, cid 3, qid 0 00:29:21.210 [2024-11-18 00:34:44.889231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.210 [2024-11-18 00:34:44.889249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.210 [2024-11-18 00:34:44.889257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.210 [2024-11-18 00:34:44.889264] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d3c0) on tqpair=0x19b2650 00:29:21.210 [2024-11-18 00:34:44.889283] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:21.210 [2024-11-18 00:34:44.889293] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:21.210 [2024-11-18 00:34:44.889300] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b2650) 00:29:21.210 [2024-11-18 00:34:44.893318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.210 [2024-11-18 00:34:44.893349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a0d3c0, cid 3, qid 0 00:29:21.210 [2024-11-18 00:34:44.893483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:21.210 [2024-11-18 00:34:44.893499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:21.210 [2024-11-18 00:34:44.893506] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:21.210 [2024-11-18 00:34:44.893516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a0d3c0) on tqpair=0x19b2650 00:29:21.210 [2024-11-18 00:34:44.893531] 
nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:29:21.210 0% 00:29:21.210 Data Units Read: 0 00:29:21.210 Data Units Written: 0 00:29:21.210 Host Read Commands: 0 00:29:21.210 Host Write Commands: 0 00:29:21.210 Controller Busy Time: 0 minutes 00:29:21.210 Power Cycles: 0 00:29:21.210 Power On Hours: 0 hours 00:29:21.210 Unsafe Shutdowns: 0 00:29:21.210 Unrecoverable Media Errors: 0 00:29:21.210 Lifetime Error Log Entries: 0 00:29:21.210 Warning Temperature Time: 0 minutes 00:29:21.210 Critical Temperature Time: 0 minutes 00:29:21.210 00:29:21.210 Number of Queues 00:29:21.210 ================ 00:29:21.210 Number of I/O Submission Queues: 127 00:29:21.210 Number of I/O Completion Queues: 127 00:29:21.210 00:29:21.210 Active Namespaces 00:29:21.210 ================= 00:29:21.210 Namespace ID:1 00:29:21.210 Error Recovery Timeout: Unlimited 00:29:21.210 Command Set Identifier: NVM (00h) 00:29:21.210 Deallocate: Supported 00:29:21.210 Deallocated/Unwritten Error: Not Supported 00:29:21.210 Deallocated Read Value: Unknown 00:29:21.210 Deallocate in Write Zeroes: Not Supported 00:29:21.210 Deallocated Guard Field: 0xFFFF 00:29:21.210 Flush: Supported 00:29:21.210 Reservation: Supported 00:29:21.210 Namespace Sharing Capabilities: Multiple Controllers 00:29:21.210 Size (in LBAs): 131072 (0GiB) 00:29:21.210 Capacity (in LBAs): 131072 (0GiB) 00:29:21.210 Utilization (in LBAs): 131072 (0GiB) 00:29:21.210 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:21.210 EUI64: ABCDEF0123456789 00:29:21.210 UUID: 5883b743-0b95-4d04-b96a-dbf4bb0e1bef 00:29:21.210 Thin Provisioning: Not Supported 00:29:21.210 Per-NS Atomic Units: Yes 00:29:21.210 Atomic Boundary Size (Normal): 0 00:29:21.210 Atomic Boundary Size (PFail): 0 00:29:21.210 Atomic Boundary Offset: 0 00:29:21.210 Maximum Single Source Range Length: 65535 00:29:21.211 Maximum Copy Length: 65535 00:29:21.211 Maximum Source Range Count: 1 00:29:21.211 
NGUID/EUI64 Never Reused: No 00:29:21.211 Namespace Write Protected: No 00:29:21.211 Number of LBA Formats: 1 00:29:21.211 Current LBA Format: LBA Format #00 00:29:21.211 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:21.211 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.211 rmmod nvme_tcp 00:29:21.211 rmmod nvme_fabrics 00:29:21.211 rmmod nvme_keyring 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@517 -- # '[' -n 340062 ']' 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 340062 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 340062 ']' 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 340062 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:21.211 00:34:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 340062 00:29:21.470 00:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:21.470 00:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:21.470 00:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 340062' 00:29:21.470 killing process with pid 340062 00:29:21.470 00:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 340062 00:29:21.470 00:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 340062 00:29:21.470 00:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:21.470 00:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:21.470 00:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:21.470 00:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:21.470 00:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:21.470 00:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:29:21.471 00:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # 
iptables-restore 00:29:21.471 00:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:21.471 00:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:21.471 00:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.471 00:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.471 00:34:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:24.009 00:29:24.009 real 0m5.618s 00:29:24.009 user 0m4.681s 00:29:24.009 sys 0m2.015s 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:24.009 ************************************ 00:29:24.009 END TEST nvmf_identify 00:29:24.009 ************************************ 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.009 ************************************ 00:29:24.009 START TEST nvmf_perf 00:29:24.009 ************************************ 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:24.009 * Looking for test storage... 
00:29:24.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:24.009 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:24.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.010 --rc genhtml_branch_coverage=1 00:29:24.010 --rc genhtml_function_coverage=1 00:29:24.010 --rc genhtml_legend=1 00:29:24.010 --rc geninfo_all_blocks=1 00:29:24.010 --rc geninfo_unexecuted_blocks=1 00:29:24.010 00:29:24.010 ' 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:24.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:29:24.010 --rc genhtml_branch_coverage=1 00:29:24.010 --rc genhtml_function_coverage=1 00:29:24.010 --rc genhtml_legend=1 00:29:24.010 --rc geninfo_all_blocks=1 00:29:24.010 --rc geninfo_unexecuted_blocks=1 00:29:24.010 00:29:24.010 ' 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:24.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.010 --rc genhtml_branch_coverage=1 00:29:24.010 --rc genhtml_function_coverage=1 00:29:24.010 --rc genhtml_legend=1 00:29:24.010 --rc geninfo_all_blocks=1 00:29:24.010 --rc geninfo_unexecuted_blocks=1 00:29:24.010 00:29:24.010 ' 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:24.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.010 --rc genhtml_branch_coverage=1 00:29:24.010 --rc genhtml_function_coverage=1 00:29:24.010 --rc genhtml_legend=1 00:29:24.010 --rc geninfo_all_blocks=1 00:29:24.010 --rc geninfo_unexecuted_blocks=1 00:29:24.010 00:29:24.010 ' 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:24.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:24.010 00:34:47 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:24.010 00:34:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:25.917 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.917 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:25.917 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:25.917 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:25.917 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:25.917 00:34:49 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:25.917 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:25.917 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:25.917 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:25.917 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:25.917 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:25.917 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:25.917 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:25.917 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:25.917 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:25.917 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.917 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.917 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.177 
00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:26.177 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:26.177 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:26.177 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.177 00:34:49 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:26.177 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.177 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:26.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:29:26.178 00:29:26.178 --- 10.0.0.2 ping statistics --- 00:29:26.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.178 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:26.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:29:26.178 00:29:26.178 --- 10.0.0.1 ping statistics --- 00:29:26.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.178 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=342236 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 342236 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 342236 ']' 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:26.178 00:34:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:26.436 [2024-11-18 00:34:50.005984] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:29:26.436 [2024-11-18 00:34:50.006063] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.436 [2024-11-18 00:34:50.083882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:26.436 [2024-11-18 00:34:50.137648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:26.436 [2024-11-18 00:34:50.137697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:26.436 [2024-11-18 00:34:50.137711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:26.436 [2024-11-18 00:34:50.137722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:26.436 [2024-11-18 00:34:50.137732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:26.436 [2024-11-18 00:34:50.139332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.436 [2024-11-18 00:34:50.139391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:26.436 [2024-11-18 00:34:50.139418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:26.436 [2024-11-18 00:34:50.139422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.695 00:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:26.695 00:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:29:26.695 00:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:26.695 00:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:26.695 00:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:26.695 00:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.695 00:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:26.695 00:34:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:29.977 00:34:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:29.977 00:34:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:29.977 00:34:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:29:29.977 00:34:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:30.235 00:34:53 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:30.235 00:34:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:29:30.235 00:34:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:30.235 00:34:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:30.235 00:34:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:30.492 [2024-11-18 00:34:54.243554] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.492 00:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:30.750 00:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:30.750 00:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:31.007 00:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:31.007 00:34:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:31.572 00:34:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:31.572 [2024-11-18 00:34:55.343504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.572 00:34:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:29:31.830 00:34:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:29:31.830 00:34:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:31.830 00:34:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:31.830 00:34:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:33.202 Initializing NVMe Controllers 00:29:33.202 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:29:33.202 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:29:33.202 Initialization complete. Launching workers. 00:29:33.202 ======================================================== 00:29:33.202 Latency(us) 00:29:33.202 Device Information : IOPS MiB/s Average min max 00:29:33.202 PCIE (0000:88:00.0) NSID 1 from core 0: 85622.64 334.46 373.19 42.48 4302.54 00:29:33.202 ======================================================== 00:29:33.202 Total : 85622.64 334.46 373.19 42.48 4302.54 00:29:33.202 00:29:33.202 00:34:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:34.576 Initializing NVMe Controllers 00:29:34.576 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:34.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:34.576 Initialization complete. Launching workers. 
00:29:34.576 ======================================================== 00:29:34.576 Latency(us) 00:29:34.576 Device Information : IOPS MiB/s Average min max 00:29:34.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 108.94 0.43 9335.58 144.33 47880.85 00:29:34.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.97 0.22 17865.71 4961.83 50831.83 00:29:34.576 ======================================================== 00:29:34.576 Total : 164.91 0.64 12230.66 144.33 50831.83 00:29:34.576 00:29:34.576 00:34:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:35.950 Initializing NVMe Controllers 00:29:35.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:35.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:35.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:35.950 Initialization complete. Launching workers. 
00:29:35.950 ======================================================== 00:29:35.950 Latency(us) 00:29:35.950 Device Information : IOPS MiB/s Average min max 00:29:35.950 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8427.99 32.92 3811.65 637.70 9781.69 00:29:35.950 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3617.00 14.13 8894.92 6917.58 16983.29 00:29:35.950 ======================================================== 00:29:35.950 Total : 12044.99 47.05 5338.11 637.70 16983.29 00:29:35.950 00:29:35.950 00:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:35.950 00:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:35.950 00:34:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:38.476 Initializing NVMe Controllers 00:29:38.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:38.476 Controller IO queue size 128, less than required. 00:29:38.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:38.476 Controller IO queue size 128, less than required. 00:29:38.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:38.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:38.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:38.476 Initialization complete. Launching workers. 
00:29:38.476 ======================================================== 00:29:38.476 Latency(us) 00:29:38.476 Device Information : IOPS MiB/s Average min max 00:29:38.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1548.34 387.08 84242.05 60186.41 142825.32 00:29:38.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 594.98 148.74 221624.45 78383.30 326361.45 00:29:38.476 ======================================================== 00:29:38.476 Total : 2143.31 535.83 122378.94 60186.41 326361.45 00:29:38.476 00:29:38.476 00:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:38.755 No valid NVMe controllers or AIO or URING devices found 00:29:38.755 Initializing NVMe Controllers 00:29:38.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:38.755 Controller IO queue size 128, less than required. 00:29:38.755 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:38.755 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:38.755 Controller IO queue size 128, less than required. 00:29:38.755 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:38.755 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:29:38.755 WARNING: Some requested NVMe devices were skipped 00:29:38.755 00:35:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:41.288 Initializing NVMe Controllers 00:29:41.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:41.288 Controller IO queue size 128, less than required. 00:29:41.288 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:41.288 Controller IO queue size 128, less than required. 00:29:41.288 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:41.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:41.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:41.288 Initialization complete. Launching workers. 
00:29:41.288 00:29:41.288 ==================== 00:29:41.288 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:41.288 TCP transport: 00:29:41.288 polls: 9620 00:29:41.288 idle_polls: 6307 00:29:41.288 sock_completions: 3313 00:29:41.288 nvme_completions: 6045 00:29:41.288 submitted_requests: 9036 00:29:41.288 queued_requests: 1 00:29:41.288 00:29:41.288 ==================== 00:29:41.288 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:41.288 TCP transport: 00:29:41.288 polls: 12126 00:29:41.288 idle_polls: 8353 00:29:41.288 sock_completions: 3773 00:29:41.288 nvme_completions: 6559 00:29:41.288 submitted_requests: 9794 00:29:41.288 queued_requests: 1 00:29:41.288 ======================================================== 00:29:41.288 Latency(us) 00:29:41.288 Device Information : IOPS MiB/s Average min max 00:29:41.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1510.73 377.68 87072.58 53494.79 159875.30 00:29:41.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1639.21 409.80 78657.43 40481.52 109697.45 00:29:41.288 ======================================================== 00:29:41.288 Total : 3149.93 787.48 82693.39 40481.52 159875.30 00:29:41.288 00:29:41.288 00:35:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:41.288 00:35:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:41.546 00:35:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:41.546 00:35:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:29:41.546 00:35:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:44.844 00:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=124c0ac0-10a3-49fd-9519-914437ca312c 00:29:44.844 00:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 124c0ac0-10a3-49fd-9519-914437ca312c 00:29:44.844 00:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=124c0ac0-10a3-49fd-9519-914437ca312c 00:29:44.844 00:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:29:44.844 00:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:29:44.844 00:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:29:44.844 00:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:45.102 00:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:29:45.102 { 00:29:45.102 "uuid": "124c0ac0-10a3-49fd-9519-914437ca312c", 00:29:45.102 "name": "lvs_0", 00:29:45.102 "base_bdev": "Nvme0n1", 00:29:45.102 "total_data_clusters": 238234, 00:29:45.102 "free_clusters": 238234, 00:29:45.102 "block_size": 512, 00:29:45.102 "cluster_size": 4194304 00:29:45.102 } 00:29:45.102 ]' 00:29:45.102 00:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="124c0ac0-10a3-49fd-9519-914437ca312c") .free_clusters' 00:29:45.102 00:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:29:45.102 00:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="124c0ac0-10a3-49fd-9519-914437ca312c") .cluster_size' 00:29:45.102 00:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:29:45.102 00:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:29:45.102 00:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 
00:29:45.102 952936 00:29:45.102 00:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:45.102 00:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:45.102 00:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 124c0ac0-10a3-49fd-9519-914437ca312c lbd_0 20480 00:29:46.040 00:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=201603db-14c6-4920-9df1-50ec384d2056 00:29:46.040 00:35:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 201603db-14c6-4920-9df1-50ec384d2056 lvs_n_0 00:29:46.606 00:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=011af6d9-f9d9-41aa-911a-6989552d9e74 00:29:46.606 00:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 011af6d9-f9d9-41aa-911a-6989552d9e74 00:29:46.606 00:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=011af6d9-f9d9-41aa-911a-6989552d9e74 00:29:46.606 00:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:29:46.606 00:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:29:46.606 00:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:29:46.606 00:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:46.865 00:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:29:46.865 { 00:29:46.865 "uuid": "124c0ac0-10a3-49fd-9519-914437ca312c", 00:29:46.865 "name": "lvs_0", 00:29:46.865 "base_bdev": "Nvme0n1", 00:29:46.865 "total_data_clusters": 238234, 00:29:46.865 "free_clusters": 233114, 00:29:46.865 "block_size": 512, 00:29:46.865 
"cluster_size": 4194304 00:29:46.865 }, 00:29:46.865 { 00:29:46.865 "uuid": "011af6d9-f9d9-41aa-911a-6989552d9e74", 00:29:46.865 "name": "lvs_n_0", 00:29:46.865 "base_bdev": "201603db-14c6-4920-9df1-50ec384d2056", 00:29:46.865 "total_data_clusters": 5114, 00:29:46.865 "free_clusters": 5114, 00:29:46.865 "block_size": 512, 00:29:46.865 "cluster_size": 4194304 00:29:46.865 } 00:29:46.865 ]' 00:29:46.865 00:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="011af6d9-f9d9-41aa-911a-6989552d9e74") .free_clusters' 00:29:46.865 00:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:29:46.865 00:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="011af6d9-f9d9-41aa-911a-6989552d9e74") .cluster_size' 00:29:46.865 00:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:29:46.865 00:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:29:46.865 00:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:29:46.865 20456 00:29:46.865 00:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:46.865 00:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 011af6d9-f9d9-41aa-911a-6989552d9e74 lbd_nest_0 20456 00:29:47.434 00:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=86c593a2-b680-437b-bb3e-a5174cfb8382 00:29:47.434 00:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:47.434 00:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:47.434 00:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 86c593a2-b680-437b-bb3e-a5174cfb8382 00:29:47.692 00:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:47.950 00:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:47.950 00:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:47.950 00:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:47.950 00:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:47.950 00:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:00.144 Initializing NVMe Controllers 00:30:00.144 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:00.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:00.144 Initialization complete. Launching workers. 
00:30:00.144 ======================================================== 00:30:00.144 Latency(us) 00:30:00.144 Device Information : IOPS MiB/s Average min max 00:30:00.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 48.09 0.02 20811.21 168.04 47518.81 00:30:00.144 ======================================================== 00:30:00.144 Total : 48.09 0.02 20811.21 168.04 47518.81 00:30:00.144 00:30:00.144 00:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:00.144 00:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:10.104 Initializing NVMe Controllers 00:30:10.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:10.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:10.105 Initialization complete. Launching workers. 
00:30:10.105 ======================================================== 00:30:10.105 Latency(us) 00:30:10.105 Device Information : IOPS MiB/s Average min max 00:30:10.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 71.00 8.88 14090.32 7048.49 50839.60 00:30:10.105 ======================================================== 00:30:10.105 Total : 71.00 8.88 14090.32 7048.49 50839.60 00:30:10.105 00:30:10.105 00:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:10.105 00:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:10.105 00:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:20.070 Initializing NVMe Controllers 00:30:20.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:20.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:20.070 Initialization complete. Launching workers. 
00:30:20.070 ======================================================== 00:30:20.070 Latency(us) 00:30:20.070 Device Information : IOPS MiB/s Average min max 00:30:20.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7543.38 3.68 4247.38 275.46 51097.78 00:30:20.070 ======================================================== 00:30:20.070 Total : 7543.38 3.68 4247.38 275.46 51097.78 00:30:20.070 00:30:20.070 00:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:20.070 00:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:30.048 Initializing NVMe Controllers 00:30:30.048 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:30.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:30.048 Initialization complete. Launching workers. 
00:30:30.048 ======================================================== 00:30:30.048 Latency(us) 00:30:30.048 Device Information : IOPS MiB/s Average min max 00:30:30.048 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3933.90 491.74 8138.31 647.12 18048.94 00:30:30.048 ======================================================== 00:30:30.048 Total : 3933.90 491.74 8138.31 647.12 18048.94 00:30:30.048 00:30:30.048 00:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:30.048 00:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:30.048 00:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:40.034 Initializing NVMe Controllers 00:30:40.034 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:40.034 Controller IO queue size 128, less than required. 00:30:40.034 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:40.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:40.034 Initialization complete. Launching workers. 
00:30:40.034 ======================================================== 00:30:40.034 Latency(us) 00:30:40.034 Device Information : IOPS MiB/s Average min max 00:30:40.034 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11610.80 5.67 11027.22 1856.91 30698.01 00:30:40.034 ======================================================== 00:30:40.034 Total : 11610.80 5.67 11027.22 1856.91 30698.01 00:30:40.034 00:30:40.034 00:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:40.034 00:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:52.250 Initializing NVMe Controllers 00:30:52.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:52.250 Controller IO queue size 128, less than required. 00:30:52.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:52.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:52.250 Initialization complete. Launching workers. 
00:30:52.250 ======================================================== 00:30:52.250 Latency(us) 00:30:52.250 Device Information : IOPS MiB/s Average min max 00:30:52.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1179.20 147.40 108870.93 15656.95 230484.13 00:30:52.250 ======================================================== 00:30:52.250 Total : 1179.20 147.40 108870.93 15656.95 230484.13 00:30:52.250 00:30:52.250 00:36:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:52.250 00:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 86c593a2-b680-437b-bb3e-a5174cfb8382 00:30:52.250 00:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 201603db-14c6-4920-9df1-50ec384d2056 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:52.250 rmmod nvme_tcp 00:30:52.250 rmmod nvme_fabrics 00:30:52.250 rmmod nvme_keyring 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 342236 ']' 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 342236 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 342236 ']' 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 342236 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 342236 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 342236' 00:30:52.250 killing process with pid 342236 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 342236 00:30:52.250 00:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 342236 00:30:54.149 00:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:54.149 00:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:30:54.149 00:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:54.149 00:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:30:54.149 00:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:30:54.149 00:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:54.149 00:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:30:54.149 00:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:54.149 00:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:54.149 00:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.149 00:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.149 00:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.061 00:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:56.061 00:30:56.061 real 1m32.191s 00:30:56.061 user 5m38.625s 00:30:56.061 sys 0m16.447s 00:30:56.061 00:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:56.061 00:36:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:56.061 ************************************ 00:30:56.061 END TEST nvmf_perf 00:30:56.061 ************************************ 00:30:56.061 00:36:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:56.061 00:36:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:56.061 00:36:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:56.061 00:36:19 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:56.061 ************************************ 00:30:56.061 START TEST nvmf_fio_host 00:30:56.061 ************************************ 00:30:56.061 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:56.061 * Looking for test storage... 00:30:56.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:56.061 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:56.061 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:30:56.061 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:56.061 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:56.061 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:56.061 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:56.061 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:56.061 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:56.061 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- 
# export 'LCOV_OPTS= 00:30:56.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.062 --rc genhtml_branch_coverage=1 00:30:56.062 --rc genhtml_function_coverage=1 00:30:56.062 --rc genhtml_legend=1 00:30:56.062 --rc geninfo_all_blocks=1 00:30:56.062 --rc geninfo_unexecuted_blocks=1 00:30:56.062 00:30:56.062 ' 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:56.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.062 --rc genhtml_branch_coverage=1 00:30:56.062 --rc genhtml_function_coverage=1 00:30:56.062 --rc genhtml_legend=1 00:30:56.062 --rc geninfo_all_blocks=1 00:30:56.062 --rc geninfo_unexecuted_blocks=1 00:30:56.062 00:30:56.062 ' 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:56.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.062 --rc genhtml_branch_coverage=1 00:30:56.062 --rc genhtml_function_coverage=1 00:30:56.062 --rc genhtml_legend=1 00:30:56.062 --rc geninfo_all_blocks=1 00:30:56.062 --rc geninfo_unexecuted_blocks=1 00:30:56.062 00:30:56.062 ' 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:56.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.062 --rc genhtml_branch_coverage=1 00:30:56.062 --rc genhtml_function_coverage=1 00:30:56.062 --rc genhtml_legend=1 00:30:56.062 --rc geninfo_all_blocks=1 00:30:56.062 --rc geninfo_unexecuted_blocks=1 00:30:56.062 00:30:56.062 ' 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.062 00:36:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.062 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:56.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:56.063 00:36:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:30:56.063 00:36:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.595 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:58.595 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:30:58.595 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:58.595 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:58.595 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:58.595 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:58.595 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:58.595 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:30:58.595 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:58.595 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:30:58.595 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:30:58.595 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:30:58.595 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:30:58.595 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:30:58.595 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:30:58.595 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:58.595 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:58.595 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:30:58.596 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:58.596 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.596 00:36:21 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:58.596 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:58.596 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:58.596 00:36:21 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:58.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:58.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:30:58.596 00:30:58.596 --- 10.0.0.2 ping statistics --- 00:30:58.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.596 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:30:58.596 00:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:58.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:58.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:30:58.596 00:30:58.596 --- 10.0.0.1 ping statistics --- 00:30:58.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.596 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=354986 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 354986 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 354986 ']' 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.596 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:58.597 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.597 [2024-11-18 00:36:22.073520] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:30:58.597 [2024-11-18 00:36:22.073606] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:58.597 [2024-11-18 00:36:22.149892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:58.597 [2024-11-18 00:36:22.196259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:58.597 [2024-11-18 00:36:22.196337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:58.597 [2024-11-18 00:36:22.196364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:58.597 [2024-11-18 00:36:22.196376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:58.597 [2024-11-18 00:36:22.196386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:58.597 [2024-11-18 00:36:22.197927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.597 [2024-11-18 00:36:22.197991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:58.597 [2024-11-18 00:36:22.198059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:58.597 [2024-11-18 00:36:22.198062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.597 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:58.597 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:30:58.597 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:58.854 [2024-11-18 00:36:22.566785] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.854 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:58.854 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:58.854 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.854 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:59.112 Malloc1 00:30:59.112 00:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:59.678 00:36:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:59.678 00:36:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:59.935 [2024-11-18 00:36:23.717546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:59.935 00:36:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:00.193 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:00.193 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:00.193 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:00.193 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:00.193 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:00.193 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:00.193 00:36:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:00.193 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:00.193 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:00.193 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:00.451 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:00.451 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:00.451 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:00.451 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:00.451 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:00.451 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:00.451 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:00.451 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:00.451 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:00.451 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:00.451 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:00.452 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:00.452 00:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:00.452 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:00.452 fio-3.35 00:31:00.452 Starting 1 thread 00:31:02.978 00:31:02.978 test: (groupid=0, jobs=1): err= 0: pid=355341: Mon Nov 18 00:36:26 2024 00:31:02.978 read: IOPS=8580, BW=33.5MiB/s (35.1MB/s)(67.3MiB/2007msec) 00:31:02.978 slat (nsec): min=1923, max=101613, avg=2391.55, stdev=1469.79 00:31:02.978 clat (usec): min=2192, max=14588, avg=8164.94, stdev=690.63 00:31:02.978 lat (usec): min=2212, max=14591, avg=8167.33, stdev=690.58 00:31:02.978 clat percentiles (usec): 00:31:02.978 | 1.00th=[ 6587], 5.00th=[ 7046], 10.00th=[ 7308], 20.00th=[ 7635], 00:31:02.978 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[ 8356], 00:31:02.978 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 8979], 95.00th=[ 9241], 00:31:02.978 | 99.00th=[ 9634], 99.50th=[ 9765], 99.90th=[12649], 99.95th=[13042], 00:31:02.978 | 99.99th=[13960] 00:31:02.978 bw ( KiB/s): min=33448, max=34856, per=99.85%, avg=34272.00, stdev=638.77, samples=4 00:31:02.978 iops : min= 8362, max= 8714, avg=8568.00, stdev=160.32, samples=4 00:31:02.978 write: IOPS=8571, BW=33.5MiB/s (35.1MB/s)(67.2MiB/2007msec); 0 zone resets 00:31:02.978 slat (nsec): min=2038, max=86101, avg=2481.25, stdev=1120.27 00:31:02.978 clat (usec): min=1594, max=12807, avg=6700.58, stdev=547.13 00:31:02.978 lat (usec): min=1601, max=12810, avg=6703.06, stdev=547.10 00:31:02.978 clat percentiles (usec): 00:31:02.978 | 1.00th=[ 5407], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6325], 00:31:02.978 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6849], 
00:31:02.978 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7504], 00:31:02.979 | 99.00th=[ 7832], 99.50th=[ 7963], 99.90th=[ 9765], 99.95th=[10814], 00:31:02.979 | 99.99th=[12780] 00:31:02.979 bw ( KiB/s): min=34104, max=34440, per=100.00%, avg=34320.00, stdev=156.90, samples=4 00:31:02.979 iops : min= 8526, max= 8610, avg=8580.00, stdev=39.23, samples=4 00:31:02.979 lat (msec) : 2=0.02%, 4=0.11%, 10=99.67%, 20=0.19% 00:31:02.979 cpu : usr=62.11%, sys=36.34%, ctx=102, majf=0, minf=41 00:31:02.979 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:02.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:02.979 issued rwts: total=17221,17203,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.979 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:02.979 00:31:02.979 Run status group 0 (all jobs): 00:31:02.979 READ: bw=33.5MiB/s (35.1MB/s), 33.5MiB/s-33.5MiB/s (35.1MB/s-35.1MB/s), io=67.3MiB (70.5MB), run=2007-2007msec 00:31:02.979 WRITE: bw=33.5MiB/s (35.1MB/s), 33.5MiB/s-33.5MiB/s (35.1MB/s-35.1MB/s), io=67.2MiB (70.5MB), run=2007-2007msec 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n 
'' ]] 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:02.979 00:36:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:03.235 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:03.235 fio-3.35 00:31:03.235 Starting 1 thread 00:31:05.762 00:31:05.762 test: (groupid=0, jobs=1): err= 0: pid=355676: Mon Nov 18 00:36:29 2024 00:31:05.762 read: IOPS=8038, BW=126MiB/s (132MB/s)(252MiB/2010msec) 00:31:05.762 slat (nsec): min=2919, max=97560, avg=3813.01, stdev=2028.34 00:31:05.762 clat (usec): min=2194, max=19586, avg=8953.10, stdev=2047.92 00:31:05.762 lat (usec): min=2197, max=19589, avg=8956.91, stdev=2047.95 00:31:05.762 clat percentiles (usec): 00:31:05.762 | 1.00th=[ 4752], 5.00th=[ 5669], 10.00th=[ 6259], 20.00th=[ 7177], 00:31:05.762 | 30.00th=[ 7767], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9503], 00:31:05.762 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11600], 95.00th=[12518], 00:31:05.762 | 99.00th=[14091], 99.50th=[14615], 99.90th=[15270], 99.95th=[15270], 00:31:05.762 | 99.99th=[15664] 00:31:05.762 bw ( KiB/s): min=60352, max=74144, per=51.88%, avg=66728.00, stdev=5700.04, samples=4 00:31:05.762 iops : min= 3772, max= 4634, avg=4170.50, stdev=356.25, samples=4 00:31:05.762 write: IOPS=4792, BW=74.9MiB/s (78.5MB/s)(136MiB/1821msec); 0 zone resets 00:31:05.762 slat (usec): min=30, max=195, avg=34.67, stdev= 6.44 00:31:05.762 clat (usec): min=4167, max=22648, avg=12091.65, stdev=2096.47 00:31:05.762 lat (usec): min=4199, max=22711, avg=12126.32, stdev=2096.73 00:31:05.762 clat percentiles (usec): 00:31:05.762 | 1.00th=[ 7963], 5.00th=[ 8979], 10.00th=[ 9634], 
20.00th=[10290], 00:31:05.762 | 30.00th=[10814], 40.00th=[11469], 50.00th=[11994], 60.00th=[12518], 00:31:05.762 | 70.00th=[13042], 80.00th=[13960], 90.00th=[15008], 95.00th=[15664], 00:31:05.762 | 99.00th=[17171], 99.50th=[17957], 99.90th=[21890], 99.95th=[22414], 00:31:05.762 | 99.99th=[22676] 00:31:05.762 bw ( KiB/s): min=62656, max=77472, per=90.60%, avg=69480.00, stdev=6198.61, samples=4 00:31:05.762 iops : min= 3916, max= 4842, avg=4342.50, stdev=387.41, samples=4 00:31:05.762 lat (msec) : 4=0.17%, 10=51.12%, 20=48.67%, 50=0.04% 00:31:05.762 cpu : usr=75.62%, sys=23.08%, ctx=51, majf=0, minf=61 00:31:05.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:31:05.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:05.762 issued rwts: total=16158,8728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:05.762 00:31:05.762 Run status group 0 (all jobs): 00:31:05.763 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=252MiB (265MB), run=2010-2010msec 00:31:05.763 WRITE: bw=74.9MiB/s (78.5MB/s), 74.9MiB/s-74.9MiB/s (78.5MB/s-78.5MB/s), io=136MiB (143MB), run=1821-1821msec 00:31:05.763 00:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:06.055 00:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:06.055 00:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:06.055 00:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:06.055 00:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:06.055 00:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 
00:31:06.055 00:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:06.055 00:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:06.055 00:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:06.055 00:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:06.055 00:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:31:06.055 00:36:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:09.522 Nvme0n1 00:31:09.522 00:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:12.045 00:36:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=9a086994-9b7b-41b0-a7a8-0848ad77bef0 00:31:12.045 00:36:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 9a086994-9b7b-41b0-a7a8-0848ad77bef0 00:31:12.045 00:36:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=9a086994-9b7b-41b0-a7a8-0848ad77bef0 00:31:12.045 00:36:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:12.045 00:36:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:12.046 00:36:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:12.046 00:36:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores 00:31:12.304 00:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:12.304 { 00:31:12.304 "uuid": "9a086994-9b7b-41b0-a7a8-0848ad77bef0", 00:31:12.304 "name": "lvs_0", 00:31:12.304 "base_bdev": "Nvme0n1", 00:31:12.304 "total_data_clusters": 930, 00:31:12.304 "free_clusters": 930, 00:31:12.304 "block_size": 512, 00:31:12.304 "cluster_size": 1073741824 00:31:12.304 } 00:31:12.304 ]' 00:31:12.304 00:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="9a086994-9b7b-41b0-a7a8-0848ad77bef0") .free_clusters' 00:31:12.304 00:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:31:12.304 00:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="9a086994-9b7b-41b0-a7a8-0848ad77bef0") .cluster_size' 00:31:12.304 00:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:31:12.304 00:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:31:12.304 00:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:31:12.304 952320 00:31:12.304 00:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:12.867 5e84c5ed-a2f6-436f-9605-6d81c41ece9d 00:31:12.867 00:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:13.125 00:36:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:13.382 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print 
$3}' 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:13.640 00:36:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:13.906 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:13.906 fio-3.35 00:31:13.906 Starting 1 thread 00:31:16.436 00:31:16.436 test: (groupid=0, jobs=1): err= 0: pid=357086: Mon Nov 18 00:36:39 2024 00:31:16.436 read: IOPS=5434, BW=21.2MiB/s (22.3MB/s)(42.6MiB/2009msec) 00:31:16.436 slat (nsec): min=1837, max=153091, avg=2437.04, stdev=2140.00 00:31:16.436 clat (usec): min=1295, max=172538, avg=12803.33, stdev=12167.31 00:31:16.436 lat (usec): min=1298, max=172574, avg=12805.77, stdev=12167.60 00:31:16.436 clat percentiles 
(msec): 00:31:16.436 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:31:16.436 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:31:16.436 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 14], 00:31:16.436 | 99.00th=[ 15], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:31:16.436 | 99.99th=[ 174] 00:31:16.436 bw ( KiB/s): min=15248, max=24136, per=99.82%, avg=21698.00, stdev=4305.06, samples=4 00:31:16.436 iops : min= 3812, max= 6034, avg=5424.50, stdev=1076.27, samples=4 00:31:16.436 write: IOPS=5415, BW=21.2MiB/s (22.2MB/s)(42.5MiB/2009msec); 0 zone resets 00:31:16.436 slat (usec): min=2, max=108, avg= 2.64, stdev= 1.54 00:31:16.436 clat (usec): min=361, max=170486, avg=10622.92, stdev=11421.90 00:31:16.436 lat (usec): min=363, max=170491, avg=10625.56, stdev=11422.16 00:31:16.436 clat percentiles (msec): 00:31:16.436 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:31:16.436 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 11], 00:31:16.436 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 11], 95.00th=[ 12], 00:31:16.436 | 99.00th=[ 13], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 171], 00:31:16.436 | 99.99th=[ 171] 00:31:16.436 bw ( KiB/s): min=16040, max=23744, per=99.91%, avg=21642.00, stdev=3739.17, samples=4 00:31:16.436 iops : min= 4010, max= 5936, avg=5410.50, stdev=934.79, samples=4 00:31:16.436 lat (usec) : 500=0.01%, 750=0.01% 00:31:16.436 lat (msec) : 2=0.02%, 4=0.10%, 10=32.14%, 20=67.12%, 50=0.01% 00:31:16.436 lat (msec) : 250=0.59% 00:31:16.436 cpu : usr=64.34%, sys=34.46%, ctx=99, majf=0, minf=41 00:31:16.436 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:16.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:16.436 issued rwts: total=10918,10879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.436 latency : target=0, window=0, percentile=100.00%, depth=128 
00:31:16.436 00:31:16.436 Run status group 0 (all jobs): 00:31:16.436 READ: bw=21.2MiB/s (22.3MB/s), 21.2MiB/s-21.2MiB/s (22.3MB/s-22.3MB/s), io=42.6MiB (44.7MB), run=2009-2009msec 00:31:16.436 WRITE: bw=21.2MiB/s (22.2MB/s), 21.2MiB/s-21.2MiB/s (22.2MB/s-22.2MB/s), io=42.5MiB (44.6MB), run=2009-2009msec 00:31:16.436 00:36:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:16.436 00:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:17.808 00:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=0d5f9b42-c4b5-4b2f-9e4d-f3c480f614bb 00:31:17.808 00:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 0d5f9b42-c4b5-4b2f-9e4d-f3c480f614bb 00:31:17.808 00:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=0d5f9b42-c4b5-4b2f-9e4d-f3c480f614bb 00:31:17.808 00:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:17.808 00:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:17.808 00:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:17.808 00:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:18.065 00:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:18.065 { 00:31:18.065 "uuid": "9a086994-9b7b-41b0-a7a8-0848ad77bef0", 00:31:18.065 "name": "lvs_0", 00:31:18.065 "base_bdev": "Nvme0n1", 00:31:18.065 "total_data_clusters": 930, 00:31:18.065 "free_clusters": 0, 00:31:18.065 "block_size": 512, 00:31:18.065 "cluster_size": 
1073741824 00:31:18.065 }, 00:31:18.065 { 00:31:18.065 "uuid": "0d5f9b42-c4b5-4b2f-9e4d-f3c480f614bb", 00:31:18.065 "name": "lvs_n_0", 00:31:18.065 "base_bdev": "5e84c5ed-a2f6-436f-9605-6d81c41ece9d", 00:31:18.065 "total_data_clusters": 237847, 00:31:18.065 "free_clusters": 237847, 00:31:18.065 "block_size": 512, 00:31:18.065 "cluster_size": 4194304 00:31:18.065 } 00:31:18.065 ]' 00:31:18.065 00:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="0d5f9b42-c4b5-4b2f-9e4d-f3c480f614bb") .free_clusters' 00:31:18.065 00:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:31:18.065 00:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="0d5f9b42-c4b5-4b2f-9e4d-f3c480f614bb") .cluster_size' 00:31:18.065 00:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:18.065 00:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:31:18.065 00:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:31:18.065 951388 00:31:18.065 00:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:18.630 ebba3242-d90a-4e5d-b75e-bfd82bac0d7f 00:31:18.630 00:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:18.888 00:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:19.146 00:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:19.403 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:19.403 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:19.403 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:19.403 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:19.403 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:19.403 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:19.403 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:19.403 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:19.403 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:19.403 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:19.403 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:19.403 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:19.403 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:31:19.403 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:19.403 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:19.403 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:19.403 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:19.403 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:19.661 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:19.661 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:19.661 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:19.661 00:36:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:19.661 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:19.661 fio-3.35 00:31:19.661 Starting 1 thread 00:31:22.188 00:31:22.188 test: (groupid=0, jobs=1): err= 0: pid=357828: Mon Nov 18 00:36:45 2024 00:31:22.188 read: IOPS=5443, BW=21.3MiB/s (22.3MB/s)(42.7MiB/2009msec) 00:31:22.188 slat (nsec): min=1938, max=122294, avg=2636.40, stdev=2073.23 00:31:22.188 clat (usec): min=4028, max=21049, avg=12801.06, stdev=1144.65 00:31:22.188 lat (usec): min=4049, max=21051, avg=12803.69, stdev=1144.54 00:31:22.188 clat percentiles (usec): 00:31:22.188 | 1.00th=[10159], 5.00th=[11076], 
10.00th=[11469], 20.00th=[11863], 00:31:22.188 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:31:22.188 | 70.00th=[13304], 80.00th=[13698], 90.00th=[14222], 95.00th=[14615], 00:31:22.188 | 99.00th=[15270], 99.50th=[15533], 99.90th=[18220], 99.95th=[19792], 00:31:22.188 | 99.99th=[21103] 00:31:22.188 bw ( KiB/s): min=20528, max=22368, per=99.76%, avg=21722.00, stdev=820.57, samples=4 00:31:22.188 iops : min= 5132, max= 5592, avg=5430.50, stdev=205.14, samples=4 00:31:22.188 write: IOPS=5421, BW=21.2MiB/s (22.2MB/s)(42.5MiB/2009msec); 0 zone resets 00:31:22.188 slat (nsec): min=2073, max=94833, avg=2788.82, stdev=1920.64 00:31:22.188 clat (usec): min=2806, max=18057, avg=10552.58, stdev=947.34 00:31:22.188 lat (usec): min=2811, max=18060, avg=10555.37, stdev=947.29 00:31:22.188 clat percentiles (usec): 00:31:22.188 | 1.00th=[ 8356], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9896], 00:31:22.188 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:31:22.188 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11600], 95.00th=[11994], 00:31:22.188 | 99.00th=[12649], 99.50th=[12911], 99.90th=[16057], 99.95th=[17695], 00:31:22.188 | 99.99th=[17957] 00:31:22.188 bw ( KiB/s): min=21504, max=21952, per=99.97%, avg=21680.00, stdev=198.12, samples=4 00:31:22.188 iops : min= 5376, max= 5488, avg=5420.00, stdev=49.53, samples=4 00:31:22.188 lat (msec) : 4=0.04%, 10=12.82%, 20=87.12%, 50=0.02% 00:31:22.188 cpu : usr=57.97%, sys=40.74%, ctx=106, majf=0, minf=41 00:31:22.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:22.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:22.188 issued rwts: total=10936,10892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.188 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:22.188 00:31:22.188 Run status group 0 (all jobs): 00:31:22.188 READ: 
bw=21.3MiB/s (22.3MB/s), 21.3MiB/s-21.3MiB/s (22.3MB/s-22.3MB/s), io=42.7MiB (44.8MB), run=2009-2009msec 00:31:22.188 WRITE: bw=21.2MiB/s (22.2MB/s), 21.2MiB/s-21.2MiB/s (22.2MB/s-22.2MB/s), io=42.5MiB (44.6MB), run=2009-2009msec 00:31:22.188 00:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:22.446 00:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:22.446 00:36:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:26.626 00:36:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:26.626 00:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:29.906 00:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:29.906 00:36:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:31.805 
00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:31.805 rmmod nvme_tcp 00:31:31.805 rmmod nvme_fabrics 00:31:31.805 rmmod nvme_keyring 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 354986 ']' 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 354986 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 354986 ']' 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 354986 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354986 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354986' 00:31:31.805 killing process with pid 354986 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 354986 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@978 -- # wait 354986 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:31.805 00:36:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:34.350 00:31:34.350 real 0m38.062s 00:31:34.350 user 2m25.938s 00:31:34.350 sys 0m7.488s 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.350 ************************************ 00:31:34.350 END TEST nvmf_fio_host 00:31:34.350 ************************************ 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.350 ************************************ 00:31:34.350 START TEST nvmf_failover 00:31:34.350 ************************************ 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:34.350 * Looking for test storage... 00:31:34.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:34.350 00:36:57 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:34.350 00:36:57 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:34.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.350 --rc genhtml_branch_coverage=1 00:31:34.350 --rc genhtml_function_coverage=1 00:31:34.350 --rc genhtml_legend=1 00:31:34.350 --rc geninfo_all_blocks=1 00:31:34.350 --rc geninfo_unexecuted_blocks=1 00:31:34.350 00:31:34.350 ' 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:34.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.350 --rc genhtml_branch_coverage=1 00:31:34.350 --rc genhtml_function_coverage=1 00:31:34.350 --rc genhtml_legend=1 00:31:34.350 --rc geninfo_all_blocks=1 00:31:34.350 --rc geninfo_unexecuted_blocks=1 00:31:34.350 00:31:34.350 ' 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:34.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.350 --rc genhtml_branch_coverage=1 00:31:34.350 --rc genhtml_function_coverage=1 00:31:34.350 --rc genhtml_legend=1 00:31:34.350 --rc geninfo_all_blocks=1 00:31:34.350 --rc geninfo_unexecuted_blocks=1 00:31:34.350 00:31:34.350 ' 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:34.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.350 --rc genhtml_branch_coverage=1 00:31:34.350 --rc genhtml_function_coverage=1 00:31:34.350 --rc genhtml_legend=1 00:31:34.350 --rc geninfo_all_blocks=1 00:31:34.350 --rc geninfo_unexecuted_blocks=1 00:31:34.350 00:31:34.350 ' 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:34.350 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:34.351 00:36:57 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:34.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:34.351 00:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:36.257 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:36.257 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:36.257 00:36:59 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:36.257 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.257 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:36.258 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:36.258 
00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:36.258 00:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:36.258 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:36.258 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:36.258 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:36.258 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:36.258 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:36.258 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:36.258 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:36.258 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:36.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:36.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:31:36.516 00:31:36.516 --- 10.0.0.2 ping statistics --- 00:31:36.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.516 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:36.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:36.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:31:36.516 00:31:36.516 --- 10.0.0.1 ping statistics --- 00:31:36.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.516 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=361087 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 361087 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 361087 ']' 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:36.516 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:36.516 [2024-11-18 00:37:00.159436] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:31:36.516 [2024-11-18 00:37:00.159513] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.516 [2024-11-18 00:37:00.236064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:36.516 [2024-11-18 00:37:00.284137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:36.516 [2024-11-18 00:37:00.284199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:36.516 [2024-11-18 00:37:00.284213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:36.516 [2024-11-18 00:37:00.284223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:31:36.516 [2024-11-18 00:37:00.284232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:36.516 [2024-11-18 00:37:00.285757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:36.516 [2024-11-18 00:37:00.285819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:36.516 [2024-11-18 00:37:00.285822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.777 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:36.777 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:36.777 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:36.777 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:36.777 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:36.777 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:36.777 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:37.046 [2024-11-18 00:37:00.709567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.046 00:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:37.308 Malloc0 00:31:37.308 00:37:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:37.566 00:37:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:37.824 00:37:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:38.081 [2024-11-18 00:37:01.898988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:38.339 00:37:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:38.597 [2024-11-18 00:37:02.167821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:38.597 00:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:38.855 [2024-11-18 00:37:02.492881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:38.855 00:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=361376 00:31:38.856 00:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:38.856 00:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:38.856 00:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 361376 /var/tmp/bdevperf.sock 00:31:38.856 00:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 361376 ']' 00:31:38.856 00:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:38.856 00:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:38.856 00:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:38.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:38.856 00:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:38.856 00:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:39.114 00:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:39.114 00:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:39.114 00:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:39.375 NVMe0n1 00:31:39.375 00:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:39.960 00:31:39.960 00:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=361511 00:31:39.960 00:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:39.960 00:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:31:40.898 00:37:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:41.156 [2024-11-18 00:37:04.789763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.789889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.789906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.789919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.789932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.789944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.789956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.789968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.789980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.789992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with 
the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 00:31:41.156 [2024-11-18 00:37:04.790351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e060 is same with the state(6) to be set 
00:31:41.156 00:37:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:44.435 00:37:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:44.697 00:31:44.697 00:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:44.956 00:37:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:48.239 00:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:48.239 [2024-11-18 00:37:11.884415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:48.239 00:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:49.173 00:37:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:49.432 [2024-11-18 00:37:13.205925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.205985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is 
same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be 
set 00:31:49.432 [2024-11-18 00:37:13.206378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 [2024-11-18 00:37:13.206455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960790 is same with the state(6) to be set 00:31:49.432 00:37:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 361511 00:31:55.998 { 00:31:55.998 "results": [ 00:31:55.998 { 00:31:55.998 "job": "NVMe0n1", 00:31:55.998 "core_mask": "0x1", 00:31:55.998 "workload": "verify", 00:31:55.998 "status": "finished", 00:31:55.998 "verify_range": { 00:31:55.998 "start": 0, 00:31:55.998 "length": 16384 00:31:55.998 }, 00:31:55.998 "queue_depth": 128, 00:31:55.998 "io_size": 4096, 00:31:55.998 "runtime": 15.006746, 00:31:55.998 "iops": 8313.527796099168, 00:31:55.998 "mibps": 32.474717953512375, 00:31:55.998 "io_failed": 11276, 00:31:55.998 "io_timeout": 0, 00:31:55.998 "avg_latency_us": 14092.531173671265, 00:31:55.998 "min_latency_us": 561.3037037037037, 00:31:55.998 "max_latency_us": 16699.543703703705 00:31:55.998 } 00:31:55.998 ], 00:31:55.998 "core_count": 1 
00:31:55.998 } 00:31:55.999 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 361376 00:31:55.999 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 361376 ']' 00:31:55.999 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 361376 00:31:55.999 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:31:55.999 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:55.999 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 361376 00:31:55.999 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:55.999 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:55.999 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 361376' 00:31:55.999 killing process with pid 361376 00:31:55.999 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 361376 00:31:55.999 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 361376 00:31:55.999 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:55.999 [2024-11-18 00:37:02.559910] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:31:55.999 [2024-11-18 00:37:02.560010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361376 ] 00:31:55.999 [2024-11-18 00:37:02.627878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.999 [2024-11-18 00:37:02.676544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.999 Running I/O for 15 seconds... 00:31:55.999 8200.00 IOPS, 32.03 MiB/s [2024-11-17T23:37:19.821Z] [2024-11-18 00:37:04.791214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.999 [2024-11-18 00:37:04.791261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.999 [2024-11-18 00:37:04.791287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.999 [2024-11-18 00:37:04.791305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.999 [2024-11-18 00:37:04.791333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.999 [2024-11-18 00:37:04.791349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.999 [2024-11-18 00:37:04.791364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.999 [2024-11-18 00:37:04.791379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:55.999 [2024-11-18 00:37:04.791394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.999 [2024-11-18 00:37:04.791408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.999 [2024-11-18 00:37:04.791423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.999 [2024-11-18 00:37:04.791437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.999 [2024-11-18 00:37:04.791452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.999 [2024-11-18 00:37:04.791466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.999 [2024-11-18 00:37:04.791481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.999 [2024-11-18 00:37:04.791496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.999 [2024-11-18 00:37:04.791511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.999 [2024-11-18 00:37:04.791524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.999 [2024-11-18 00:37:04.791539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:55.999 [2024-11-18 00:37:04.791553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.999 [2024-11-18 00:37:04.791568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:55.999 [2024-11-18 00:37:04.791582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~120 repeated command/completion pairs elided: WRITE (sqid:1, lba:76584-77000, len:8, SGL DATA BLOCK OFFSET) and READ (sqid:1, lba:75992-76488, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT) commands, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, timestamps 00:37:04.791604-00:37:04.795091 ...]
00:31:56.002 [2024-11-18 00:37:04.795121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:56.002 [2024-11-18 00:37:04.795137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:56.002 [2024-11-18 00:37:04.795149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77008 len:8 PRP1 0x0 PRP2 0x0
00:31:56.002 [2024-11-18 00:37:04.795166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.002 [2024-11-18 00:37:04.795246] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start 
failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:56.002 [2024-11-18 00:37:04.795285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.002 [2024-11-18 00:37:04.795327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.002 [2024-11-18 00:37:04.795343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.002 [2024-11-18 00:37:04.795363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.002 [2024-11-18 00:37:04.795378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.002 [2024-11-18 00:37:04.795391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.002 [2024-11-18 00:37:04.795405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.002 [2024-11-18 00:37:04.795420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.002 [2024-11-18 00:37:04.795434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:31:56.002 [2024-11-18 00:37:04.795495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f53b0 (9): Bad file descriptor 00:31:56.002 [2024-11-18 00:37:04.798764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:56.002 [2024-11-18 00:37:04.861846] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:31:56.002 8172.50 IOPS, 31.92 MiB/s [2024-11-17T23:37:19.824Z] 8362.67 IOPS, 32.67 MiB/s [2024-11-17T23:37:19.824Z] 8462.25 IOPS, 33.06 MiB/s [2024-11-17T23:37:19.824Z] [2024-11-18 00:37:08.589550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.002 [2024-11-18 00:37:08.589618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.002 [2024-11-18 00:37:08.589647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.002 [2024-11-18 00:37:08.589664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.002 [2024-11-18 00:37:08.589706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.002 [2024-11-18 00:37:08.589721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.002 [2024-11-18 00:37:08.589736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.002 [2024-11-18 00:37:08.589749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.002 [2024-11-18 00:37:08.589764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.002 [2024-11-18 00:37:08.589793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.002 [2024-11-18 00:37:08.589808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.002 [2024-11-18 00:37:08.589822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.002 [2024-11-18 00:37:08.589836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.002 [2024-11-18 00:37:08.589852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.002 [2024-11-18 00:37:08.589866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.002 [2024-11-18 00:37:08.589881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.002 [2024-11-18 00:37:08.589895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.002 [2024-11-18 00:37:08.589908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.002 [2024-11-18 00:37:08.589922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.002 [2024-11-18 
00:37:08.589935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.002 [2024-11-18 00:37:08.589950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.002 [2024-11-18 00:37:08.589964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.002 [2024-11-18 00:37:08.589979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.002 [2024-11-18 00:37:08.589993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.002 [2024-11-18 00:37:08.590007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.002 [2024-11-18 00:37:08.590019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.002 [2024-11-18 00:37:08.590035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.002 [2024-11-18 00:37:08.590048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.003 [2024-11-18 00:37:08.590087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:100 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.003 [2024-11-18 00:37:08.590117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590420] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:120 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:56.003 [2024-11-18 00:37:08.590759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.003 [2024-11-18 00:37:08.590984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.003 [2024-11-18 00:37:08.590996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.004 [2024-11-18 00:37:08.591023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 
[2024-11-18 00:37:08.591241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 
[2024-11-18 00:37:08.591803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.591975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.591990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.592003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.592017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.004 [2024-11-18 00:37:08.592031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.592045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.004 [2024-11-18 00:37:08.592058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.592073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.004 [2024-11-18 00:37:08.592086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.592101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.004 [2024-11-18 00:37:08.592114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.592128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 
lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.004 [2024-11-18 00:37:08.592141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.592155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.004 [2024-11-18 00:37:08.592169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.004 [2024-11-18 00:37:08.592183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.004 [2024-11-18 00:37:08.592196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.005 [2024-11-18 00:37:08.592223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 
[2024-11-18 00:37:08.592293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 
[2024-11-18 00:37:08.592833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.592978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.592990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.593005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.593018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.593038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.593052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.593066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.593079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.593094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.593111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.593126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.593139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.593154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.593167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.593182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.593195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.593209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.005 [2024-11-18 00:37:08.593223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.593237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.593250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.593265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.593279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.593294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.593307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 
[2024-11-18 00:37:08.593346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.593361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.593376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.593390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.005 [2024-11-18 00:37:08.593405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.005 [2024-11-18 00:37:08.593418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:08.593434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:08.593447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:08.593461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2d450 is same with the state(6) to be set 00:31:56.006 [2024-11-18 00:37:08.593480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:56.006 [2024-11-18 00:37:08.593495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:56.006 [2024-11-18 00:37:08.593507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95864 len:8 PRP1 0x0 PRP2 0x0 00:31:56.006 [2024-11-18 
00:37:08.593525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:08.593598] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:31:56.006 [2024-11-18 00:37:08.593652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.006 [2024-11-18 00:37:08.593671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:08.593700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.006 [2024-11-18 00:37:08.593714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:08.593728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.006 [2024-11-18 00:37:08.593741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:08.593755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.006 [2024-11-18 00:37:08.593767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:08.593780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:31:56.006 [2024-11-18 00:37:08.597129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:31:56.006 [2024-11-18 00:37:08.597171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f53b0 (9): Bad file descriptor 00:31:56.006 8392.40 IOPS, 32.78 MiB/s [2024-11-17T23:37:19.828Z] [2024-11-18 00:37:08.740779] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:31:56.006 8311.00 IOPS, 32.46 MiB/s [2024-11-17T23:37:19.828Z] 8350.57 IOPS, 32.62 MiB/s [2024-11-17T23:37:19.828Z] 8395.62 IOPS, 32.80 MiB/s [2024-11-17T23:37:19.828Z] 8434.11 IOPS, 32.95 MiB/s [2024-11-17T23:37:19.828Z] [2024-11-18 00:37:13.206473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.006 [2024-11-18 00:37:13.206515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.206534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.006 [2024-11-18 00:37:13.206548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.206562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.006 [2024-11-18 00:37:13.206576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.206599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.006 [2024-11-18 00:37:13.206612] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.206626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f53b0 is same with the state(6) to be set 00:31:56.006 [2024-11-18 00:37:13.206693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:57648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.206720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.206743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:57656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.206759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.206775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.206804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.206820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.206833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.206848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:57680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.206877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.206892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.206904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.206919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.206932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.206946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.206959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.206973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:57712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.206986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.207000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:57720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.207013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.207026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 
[2024-11-18 00:37:13.207039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.207053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:57736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.207066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.207081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.207094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.207116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.207130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.207144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.207157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.207171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:57768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.207183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.207197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.207210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.207224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.207237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.207251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.207263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.207278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:57800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.207291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.207339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:57808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.207355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.006 [2024-11-18 00:37:13.207370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.006 [2024-11-18 00:37:13.207384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.006 [2024-11-18 00:37:13.207399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.006 [2024-11-18 00:37:13.207413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ command/completion pairs elided: lba:57832 through lba:58256, len:8, all completed ABORTED - SQ DELETION (00/08) ...]
00:31:56.008 [2024-11-18 00:37:13.209004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.008 [2024-11-18 00:37:13.209017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated WRITE command/completion pairs elided: lba:58272 through lba:58656, len:8, all completed ABORTED - SQ DELETION (00/08) ...]
00:31:56.009 [2024-11-18 00:37:13.210491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:56.009 [2024-11-18 00:37:13.210508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:56.009 [2024-11-18 00:37:13.210520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58664 len:8 PRP1 0x0 PRP2 0x0
00:31:56.009 [2024-11-18 00:37:13.210534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.009 [2024-11-18 00:37:13.210596] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:31:56.009 [2024-11-18 00:37:13.210630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:31:56.009 [2024-11-18 00:37:13.213895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:31:56.009 [2024-11-18 00:37:13.213936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f53b0 (9): Bad file descriptor 00:31:56.009 [2024-11-18 00:37:13.279460] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:31:56.009 8364.00 IOPS, 32.67 MiB/s [2024-11-17T23:37:19.831Z] 8342.82 IOPS, 32.59 MiB/s [2024-11-17T23:37:19.831Z] 8333.92 IOPS, 32.55 MiB/s [2024-11-17T23:37:19.831Z] 8328.77 IOPS, 32.53 MiB/s [2024-11-17T23:37:19.831Z] 8320.36 IOPS, 32.50 MiB/s 00:31:56.009 Latency(us) 00:31:56.009 [2024-11-17T23:37:19.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:56.009 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:56.009 Verification LBA range: start 0x0 length 0x4000 00:31:56.009 NVMe0n1 : 15.01 8313.53 32.47 751.40 0.00 14092.53 561.30 16699.54 00:31:56.009 [2024-11-17T23:37:19.831Z] =================================================================================================================== 00:31:56.009 [2024-11-17T23:37:19.831Z] Total : 8313.53 32.47 751.40 0.00 14092.53 561.30 16699.54 00:31:56.009 Received shutdown signal, test time was about 15.000000 seconds 00:31:56.009 00:31:56.009 Latency(us) 00:31:56.009 [2024-11-17T23:37:19.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:56.009 [2024-11-17T23:37:19.831Z] =================================================================================================================== 00:31:56.009 [2024-11-17T23:37:19.832Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:56.010 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:56.010 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@65 -- # count=3 00:31:56.010 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:56.010 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=363344 00:31:56.010 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:56.010 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 363344 /var/tmp/bdevperf.sock 00:31:56.010 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 363344 ']' 00:31:56.010 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:56.010 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:56.010 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:56.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:56.010 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:56.010 00:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:56.010 00:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.010 00:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:56.010 00:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:56.010 [2024-11-18 00:37:19.454146] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:56.010 00:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:56.010 [2024-11-18 00:37:19.775070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:56.010 00:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:56.587 NVMe0n1 00:31:56.587 00:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:57.156 00:31:57.156 00:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:57.414 00:31:57.414 00:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:57.414 00:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:57.671 00:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:57.929 00:37:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:01.211 00:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:01.211 00:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:01.211 00:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=364014 00:32:01.211 00:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:01.211 00:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 364014 00:32:02.583 { 00:32:02.583 "results": [ 00:32:02.583 { 00:32:02.583 "job": "NVMe0n1", 00:32:02.583 "core_mask": "0x1", 00:32:02.583 "workload": "verify", 00:32:02.583 "status": "finished", 00:32:02.583 "verify_range": { 00:32:02.583 "start": 0, 00:32:02.583 "length": 16384 00:32:02.583 }, 00:32:02.583 "queue_depth": 128, 00:32:02.583 "io_size": 4096, 00:32:02.583 "runtime": 1.049414, 00:32:02.583 "iops": 8326.551770797798, 00:32:02.583 "mibps": 32.5255928546789, 00:32:02.583 "io_failed": 0, 00:32:02.583 "io_timeout": 0, 00:32:02.583 "avg_latency_us": 
14719.02876122174, 00:32:02.583 "min_latency_us": 3325.345185185185, 00:32:02.583 "max_latency_us": 44079.02814814815 00:32:02.583 } 00:32:02.583 ], 00:32:02.583 "core_count": 1 00:32:02.583 } 00:32:02.583 00:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:02.583 [2024-11-18 00:37:18.980420] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:32:02.583 [2024-11-18 00:37:18.980514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363344 ] 00:32:02.583 [2024-11-18 00:37:19.048096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.583 [2024-11-18 00:37:19.092478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.583 [2024-11-18 00:37:21.619153] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:02.583 [2024-11-18 00:37:21.619249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.583 [2024-11-18 00:37:21.619273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.583 [2024-11-18 00:37:21.619290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.583 [2024-11-18 00:37:21.619329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.583 [2024-11-18 00:37:21.619345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:32:02.583 [2024-11-18 00:37:21.619360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.583 [2024-11-18 00:37:21.619375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:02.583 [2024-11-18 00:37:21.619390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.583 [2024-11-18 00:37:21.619407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:32:02.583 [2024-11-18 00:37:21.619452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:02.583 [2024-11-18 00:37:21.619485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143c3b0 (9): Bad file descriptor 00:32:02.583 [2024-11-18 00:37:21.670546] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:02.583 Running I/O for 1 seconds... 
00:32:02.583 8610.00 IOPS, 33.63 MiB/s 00:32:02.583 Latency(us) 00:32:02.583 [2024-11-17T23:37:26.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:02.583 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:02.583 Verification LBA range: start 0x0 length 0x4000 00:32:02.583 NVMe0n1 : 1.05 8326.55 32.53 0.00 0.00 14719.03 3325.35 44079.03 00:32:02.583 [2024-11-17T23:37:26.405Z] =================================================================================================================== 00:32:02.583 [2024-11-17T23:37:26.405Z] Total : 8326.55 32.53 0.00 0.00 14719.03 3325.35 44079.03 00:32:02.583 00:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:02.583 00:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:02.583 00:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:02.840 00:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:02.841 00:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:03.098 00:37:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:03.663 00:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:06.951 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:06.951 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:06.951 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 363344 00:32:06.951 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 363344 ']' 00:32:06.951 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 363344 00:32:06.951 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:06.952 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:06.952 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 363344 00:32:06.952 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:06.952 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:06.952 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 363344' 00:32:06.952 killing process with pid 363344 00:32:06.952 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 363344 00:32:06.952 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 363344 00:32:06.952 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:06.952 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:07.208 rmmod nvme_tcp 00:32:07.208 rmmod nvme_fabrics 00:32:07.208 rmmod nvme_keyring 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 361087 ']' 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 361087 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 361087 ']' 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 361087 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:07.208 00:37:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 361087 00:32:07.467 00:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:32:07.467 00:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:07.467 00:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 361087' 00:32:07.467 killing process with pid 361087 00:32:07.467 00:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 361087 00:32:07.467 00:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 361087 00:32:07.467 00:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:07.467 00:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:07.467 00:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:07.467 00:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:07.467 00:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:07.467 00:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:07.467 00:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:07.467 00:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:07.467 00:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:07.467 00:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.467 00:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:07.467 00:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:09.999 00:32:09.999 real 0m35.605s 00:32:09.999 user 2m5.124s 00:32:09.999 sys 
0m6.257s 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:09.999 ************************************ 00:32:09.999 END TEST nvmf_failover 00:32:09.999 ************************************ 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.999 ************************************ 00:32:09.999 START TEST nvmf_host_discovery 00:32:09.999 ************************************ 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:09.999 * Looking for test storage... 
00:32:09.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:09.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.999 --rc genhtml_branch_coverage=1 00:32:09.999 --rc genhtml_function_coverage=1 00:32:09.999 --rc 
genhtml_legend=1 00:32:09.999 --rc geninfo_all_blocks=1 00:32:09.999 --rc geninfo_unexecuted_blocks=1 00:32:09.999 00:32:09.999 ' 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:09.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.999 --rc genhtml_branch_coverage=1 00:32:09.999 --rc genhtml_function_coverage=1 00:32:09.999 --rc genhtml_legend=1 00:32:09.999 --rc geninfo_all_blocks=1 00:32:09.999 --rc geninfo_unexecuted_blocks=1 00:32:09.999 00:32:09.999 ' 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:09.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.999 --rc genhtml_branch_coverage=1 00:32:09.999 --rc genhtml_function_coverage=1 00:32:09.999 --rc genhtml_legend=1 00:32:09.999 --rc geninfo_all_blocks=1 00:32:09.999 --rc geninfo_unexecuted_blocks=1 00:32:09.999 00:32:09.999 ' 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:09.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.999 --rc genhtml_branch_coverage=1 00:32:09.999 --rc genhtml_function_coverage=1 00:32:09.999 --rc genhtml_legend=1 00:32:09.999 --rc geninfo_all_blocks=1 00:32:09.999 --rc geninfo_unexecuted_blocks=1 00:32:09.999 00:32:09.999 ' 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:09.999 00:37:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.999 00:37:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.999 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:10.000 00:37:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:10.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:10.000 00:37:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.903 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:11.903 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:11.903 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:11.903 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:11.903 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:11.903 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:11.903 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:11.903 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:11.903 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:11.903 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:11.903 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:11.903 
00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:11.903 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:11.903 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:11.903 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:11.903 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:11.903 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:11.903 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:11.903 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:11.904 00:37:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:11.904 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:11.904 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:11.904 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:11.904 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:11.904 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:12.162 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:12.162 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:12.162 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:12.162 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:12.162 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:12.162 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:12.162 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:12.162 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:12.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:12.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:32:12.162 00:32:12.162 --- 10.0.0.2 ping statistics --- 00:32:12.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.162 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:32:12.162 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:12.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:12.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:32:12.162 00:32:12.162 --- 10.0.0.1 ping statistics --- 00:32:12.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.162 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:32:12.162 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:12.162 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:32:12.162 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:12.162 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:12.163 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:12.163 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:12.163 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:12.163 
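
Condensed, the namespace topology that `nvmf_tcp_init` builds in the trace above looks like the sketch below: the target port moves into a network namespace, the initiator port stays in the root namespace, and a firewall rule admits NVMe/TCP traffic on 4420 before both directions are ping-verified. `run` only prints here so the sketch can be previewed without root; replacing it with `sudo "$@"` would execute the commands for real.

```shell
run() { printf '+ %s\n' "$*"; }   # dry-run helper; swap for 'sudo "$@"' to apply

TARGET_NS=cvl_0_0_ns_spdk   # namespace name from the log
TGT_IF=cvl_0_0              # target-side port, gets 10.0.0.2
INI_IF=cvl_0_1              # initiator-side port, gets 10.0.0.1

run ip netns add "$TARGET_NS"
run ip link set "$TGT_IF" netns "$TARGET_NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$TARGET_NS" ip link set "$TGT_IF" up
run ip netns exec "$TARGET_NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                               # root ns -> target ns
run ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1    # target ns -> root ns
```

Moving the target NIC into its own namespace is what lets one physical host act as both NVMe/TCP initiator and target over a real link, which is why the target app is later launched under `ip netns exec cvl_0_0_ns_spdk`.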
00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:12.163 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:12.163 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:12.163 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:12.163 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:12.163 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.163 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=366745 00:32:12.163 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:12.163 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 366745 00:32:12.163 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 366745 ']' 00:32:12.163 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.163 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:12.163 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:12.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:12.163 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:12.163 00:37:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.420 [2024-11-18 00:37:35.998060] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:32:12.420 [2024-11-18 00:37:35.998133] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:12.420 [2024-11-18 00:37:36.069729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.420 [2024-11-18 00:37:36.114732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:12.420 [2024-11-18 00:37:36.114781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:12.420 [2024-11-18 00:37:36.114803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:12.420 [2024-11-18 00:37:36.114814] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:12.420 [2024-11-18 00:37:36.114824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
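
The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line is `waitforlisten` polling until the freshly launched target's RPC socket appears. A minimal loop in the same spirit (simplified sketch; `waitforsocket` is a hypothetical name, and the real helper also retries an RPC against the socket rather than just testing for its existence):

```shell
waitforsocket() {
    local pid=$1 sock=$2 retries=${3:-100}
    while ((retries-- > 0)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died while we waited
        [[ -S $sock ]] && return 0               # socket exists: target is up
        sleep 0.1
    done
    return 1                                     # timed out
}

# Demo against a short-lived background process and a socket that never appears.
sleep 5 & demo_pid=$!
waitforsocket "$demo_pid" "/tmp/no_such_$$.sock" 3 || echo "no socket yet"
kill "$demo_pid" 2>/dev/null
```

Polling both the pid and the socket matters: checking only the socket would hang for the full timeout if the target crashed during startup.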
00:32:12.420 [2024-11-18 00:37:36.115417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:12.420 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:12.420 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:12.420 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:12.420 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:12.420 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.678 [2024-11-18 00:37:36.249220] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.678 [2024-11-18 00:37:36.257418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:12.678 00:37:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.678 null0 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.678 null1 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=366768 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 366768 /tmp/host.sock 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 366768 ']' 00:32:12.678 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:32:12.679 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:12.679 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:12.679 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:12.679 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:12.679 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.679 [2024-11-18 00:37:36.329306] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:32:12.679 [2024-11-18 00:37:36.329391] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366768 ] 00:32:12.679 [2024-11-18 00:37:36.394817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.679 [2024-11-18 00:37:36.439800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:12.937 00:37:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:12.937 00:37:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:12.937 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.938 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:12.938 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:12.938 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.938 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.938 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.938 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:12.938 00:37:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:12.938 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:12.938 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.938 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.938 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:12.938 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:12.938 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.196 [2024-11-18 00:37:36.822961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:13.196 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.197 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:13.197 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:13.197 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.197 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:32:13.197 00:37:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:14.129 [2024-11-18 00:37:37.627959] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:14.129 [2024-11-18 00:37:37.627983] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:14.129 [2024-11-18 00:37:37.628005] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:14.129 [2024-11-18 00:37:37.714279] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:14.129 [2024-11-18 00:37:37.815136] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:14.129 [2024-11-18 00:37:37.816157] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0xd331b0:1 started. 00:32:14.129 [2024-11-18 00:37:37.817912] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:14.129 [2024-11-18 00:37:37.817932] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:14.129 [2024-11-18 00:37:37.824841] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xd331b0 was disconnected and freed. delete nvme_qpair. 00:32:14.387 00:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:14.387 00:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:14.387 00:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:14.387 00:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:14.387 00:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:14.387 00:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.387 00:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.387 00:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:14.387 00:37:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:14.387 00:37:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:14.387 
00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:14.387 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:14.648 [2024-11-18 00:37:38.403302] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xd1d240:1 started. 00:32:14.648 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.648 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:14.648 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:14.648 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:14.648 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:14.648 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:14.648 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:14.648 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:14.649 00:37:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:14.649 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:14.649 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:14.649 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:14.649 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:14.649 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.649 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.649 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.649 [2024-11-18 00:37:38.446911] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xd1d240 was disconnected and freed. delete nvme_qpair. 
00:32:14.649 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:14.649 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:14.649 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:14.649 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:16.049 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:16.049 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:16.049 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:16.049 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:16.049 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.050 [2024-11-18 00:37:39.510604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:16.050 [2024-11-18 00:37:39.511560] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:16.050 [2024-11-18 00:37:39.511595] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:16.050 00:37:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:16.050 [2024-11-18 00:37:39.597874] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:16.050 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:16.050 [2024-11-18 00:37:39.866456] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:32:16.050 [2024-11-18 00:37:39.866513] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:16.050 [2024-11-18 00:37:39.866528] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:32:16.050 [2024-11-18 00:37:39.866536] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:16.984 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:16.984 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:16.984 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:16.984 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:16.984 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:16.984 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.984 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.984 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:16.984 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:16.984 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.985 [2024-11-18 00:37:40.739267] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:16.985 [2024-11-18 00:37:40.739338] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:16.985 [2024-11-18 00:37:40.740780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:16.985 [2024-11-18 00:37:40.740816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.985 [2024-11-18 00:37:40.740849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:16.985 [2024-11-18 00:37:40.740863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.985 [2024-11-18 00:37:40.740877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:16.985 [2024-11-18 00:37:40.740891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.985 [2024-11-18 00:37:40.740920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:16.985 [2024-11-18 00:37:40.740934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:16.985 [2024-11-18 00:37:40.740948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd051f0 is same with the state(6) to be set 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:16.985 00:37:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.985 [2024-11-18 00:37:40.750764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd051f0 (9): Bad file descriptor 00:32:16.985 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.985 [2024-11-18 00:37:40.760805] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:16.985 [2024-11-18 00:37:40.760836] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:16.985 [2024-11-18 00:37:40.760847] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:16.985 [2024-11-18 00:37:40.760870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:16.985 [2024-11-18 00:37:40.760911] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:16.985 [2024-11-18 00:37:40.761069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.985 [2024-11-18 00:37:40.761100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd051f0 with addr=10.0.0.2, port=4420 00:32:16.985 [2024-11-18 00:37:40.761117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd051f0 is same with the state(6) to be set 00:32:16.985 [2024-11-18 00:37:40.761140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd051f0 (9): Bad file descriptor 00:32:16.985 [2024-11-18 00:37:40.761163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:16.985 [2024-11-18 00:37:40.761177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:16.985 [2024-11-18 00:37:40.761194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:16.985 [2024-11-18 00:37:40.761207] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:16.985 [2024-11-18 00:37:40.761217] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:16.985 [2024-11-18 00:37:40.761225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:16.985 [2024-11-18 00:37:40.770944] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:16.985 [2024-11-18 00:37:40.770965] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:32:16.985 [2024-11-18 00:37:40.770974] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:16.985 [2024-11-18 00:37:40.770981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:16.985 [2024-11-18 00:37:40.771004] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:16.985 [2024-11-18 00:37:40.771169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.985 [2024-11-18 00:37:40.771198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd051f0 with addr=10.0.0.2, port=4420 00:32:16.985 [2024-11-18 00:37:40.771215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd051f0 is same with the state(6) to be set 00:32:16.985 [2024-11-18 00:37:40.771238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd051f0 (9): Bad file descriptor 00:32:16.985 [2024-11-18 00:37:40.771259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:16.985 [2024-11-18 00:37:40.771273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:16.985 [2024-11-18 00:37:40.771286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:16.985 [2024-11-18 00:37:40.771299] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:16.985 [2024-11-18 00:37:40.771308] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:16.985 [2024-11-18 00:37:40.771328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:16.986 [2024-11-18 00:37:40.781039] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:16.986 [2024-11-18 00:37:40.781064] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:16.986 [2024-11-18 00:37:40.781074] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:16.986 [2024-11-18 00:37:40.781081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:16.986 [2024-11-18 00:37:40.781104] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:16.986 [2024-11-18 00:37:40.781345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.986 [2024-11-18 00:37:40.781374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd051f0 with addr=10.0.0.2, port=4420 00:32:16.986 [2024-11-18 00:37:40.781392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd051f0 is same with the state(6) to be set 00:32:16.986 [2024-11-18 00:37:40.781415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd051f0 (9): Bad file descriptor 00:32:16.986 [2024-11-18 00:37:40.781480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:16.986 [2024-11-18 00:37:40.781501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:16.986 [2024-11-18 00:37:40.781517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:16.986 [2024-11-18 00:37:40.781530] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:32:16.986 [2024-11-18 00:37:40.781539] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:16.986 [2024-11-18 00:37:40.781547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:16.986 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.986 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:16.986 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:16.986 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:16.986 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:16.986 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:16.986 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:16.986 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:16.986 [2024-11-18 00:37:40.791140] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:16.986 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:16.986 [2024-11-18 00:37:40.791164] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:16.986 [2024-11-18 00:37:40.791176] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:32:16.986 [2024-11-18 00:37:40.791183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:16.986 [2024-11-18 00:37:40.791208] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:16.986 [2024-11-18 00:37:40.791379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.986 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.986 [2024-11-18 00:37:40.791409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd051f0 with addr=10.0.0.2, port=4420 00:32:16.986 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:16.986 [2024-11-18 00:37:40.791443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd051f0 is same with the state(6) to be set 00:32:16.986 [2024-11-18 00:37:40.791467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd051f0 (9): Bad file descriptor 00:32:16.986 [2024-11-18 00:37:40.791503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:16.986 [2024-11-18 00:37:40.791523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:16.986 [2024-11-18 00:37:40.791540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:16.986 [2024-11-18 00:37:40.791562] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:32:16.986 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.986 [2024-11-18 00:37:40.791571] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:16.986 [2024-11-18 00:37:40.791583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:16.986 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:16.986 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:16.986 [2024-11-18 00:37:40.801242] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:16.986 [2024-11-18 00:37:40.801266] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:16.986 [2024-11-18 00:37:40.801276] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:16.986 [2024-11-18 00:37:40.801283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:16.986 [2024-11-18 00:37:40.801332] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:16.986 [2024-11-18 00:37:40.801456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.986 [2024-11-18 00:37:40.801485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd051f0 with addr=10.0.0.2, port=4420 00:32:16.986 [2024-11-18 00:37:40.801503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd051f0 is same with the state(6) to be set 00:32:16.986 [2024-11-18 00:37:40.801526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd051f0 (9): Bad file descriptor 00:32:16.986 [2024-11-18 00:37:40.801558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:16.986 [2024-11-18 00:37:40.801583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:16.986 [2024-11-18 00:37:40.801598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:16.986 [2024-11-18 00:37:40.801611] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:16.986 [2024-11-18 00:37:40.801620] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:16.986 [2024-11-18 00:37:40.801628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:17.246 [2024-11-18 00:37:40.811367] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:17.246 [2024-11-18 00:37:40.811390] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:32:17.246 [2024-11-18 00:37:40.811400] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:17.246 [2024-11-18 00:37:40.811413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:17.246 [2024-11-18 00:37:40.811440] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:17.246 [2024-11-18 00:37:40.811570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.246 [2024-11-18 00:37:40.811598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd051f0 with addr=10.0.0.2, port=4420 00:32:17.246 [2024-11-18 00:37:40.811616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd051f0 is same with the state(6) to be set 00:32:17.246 [2024-11-18 00:37:40.811638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd051f0 (9): Bad file descriptor 00:32:17.246 [2024-11-18 00:37:40.811672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:17.246 [2024-11-18 00:37:40.811691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:17.246 [2024-11-18 00:37:40.811705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:17.246 [2024-11-18 00:37:40.811718] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:17.246 [2024-11-18 00:37:40.811728] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:17.246 [2024-11-18 00:37:40.811735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.246 [2024-11-18 00:37:40.821474] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:17.246 [2024-11-18 00:37:40.821496] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:17.246 [2024-11-18 00:37:40.821505] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:17.246 [2024-11-18 00:37:40.821513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:17.246 [2024-11-18 00:37:40.821538] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:17.246 [2024-11-18 00:37:40.821678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:17.246 [2024-11-18 00:37:40.821707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd051f0 with addr=10.0.0.2, port=4420 00:32:17.246 [2024-11-18 00:37:40.821724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd051f0 is same with the state(6) to be set 00:32:17.246 [2024-11-18 00:37:40.821747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd051f0 (9): Bad file descriptor 00:32:17.246 [2024-11-18 00:37:40.821780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:17.246 [2024-11-18 00:37:40.821798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:17.246 [2024-11-18 00:37:40.821812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:32:17.246 [2024-11-18 00:37:40.821825] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:17.246 [2024-11-18 00:37:40.821834] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:17.246 [2024-11-18 00:37:40.821841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:17.246 [2024-11-18 00:37:40.825802] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:17.246 [2024-11-18 00:37:40.825833] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:17.246 00:37:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:17.246 00:37:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:17.246 00:37:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:17.246 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.247 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:17.247 00:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.247 00:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.620 [2024-11-18 00:37:42.088126] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:18.620 [2024-11-18 
00:37:42.088151] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:18.620 [2024-11-18 00:37:42.088175] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:18.620 [2024-11-18 00:37:42.176477] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:18.879 [2024-11-18 00:37:42.483096] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:32:18.879 [2024-11-18 00:37:42.484114] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xd2ddf0:1 started. 00:32:18.879 [2024-11-18 00:37:42.486250] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:18.879 [2024-11-18 00:37:42.486303] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:18.879 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.879 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:18.879 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:18.879 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:18.879 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:18.879 [2024-11-18 00:37:42.487884] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 3] qpair 0xd2ddf0 was disconnected and freed. delete nvme_qpair. 00:32:18.879 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:18.879 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:18.879 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:18.879 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.880 request: 00:32:18.880 { 00:32:18.880 "name": "nvme", 00:32:18.880 "trtype": "tcp", 00:32:18.880 "traddr": "10.0.0.2", 00:32:18.880 "adrfam": "ipv4", 00:32:18.880 "trsvcid": "8009", 00:32:18.880 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:18.880 "wait_for_attach": true, 00:32:18.880 "method": "bdev_nvme_start_discovery", 00:32:18.880 "req_id": 1 00:32:18.880 } 00:32:18.880 Got JSON-RPC error response 00:32:18.880 response: 00:32:18.880 { 00:32:18.880 "code": -17, 00:32:18.880 "message": "File exists" 00:32:18.880 } 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
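The `NOT` wrapper exercised above (autotest_common.sh@652-679) runs an RPC that is expected to fail and inverts its exit status. A simplified re-creation of that pattern, not the exact SPDK helper:

```shell
# Simplified sketch of the NOT expected-failure wrapper traced above; the
# real helper also validates the argument with valid_exec_arg and caps the
# propagated exit status. The name matches the trace, the logic is reduced.
NOT() {
    local es=0
    "$@" || es=$?
    # Succeed only when the wrapped command failed, mirroring (( !es == 0 )).
    (( es != 0 ))
}

NOT false && echo "expected failure observed"
NOT true || echo "unexpected success caught"
```

In the trace, this is what lets a second `bdev_nvme_start_discovery -b nvme` call count as a pass: the RPC fails with "File exists", and `NOT` turns that failure into success.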
00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ 
\n\v\m\e\0\n\2 ]] 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.880 request: 00:32:18.880 { 00:32:18.880 "name": "nvme_second", 00:32:18.880 "trtype": "tcp", 00:32:18.880 "traddr": "10.0.0.2", 00:32:18.880 "adrfam": "ipv4", 00:32:18.880 "trsvcid": "8009", 00:32:18.880 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:18.880 "wait_for_attach": true, 00:32:18.880 "method": "bdev_nvme_start_discovery", 00:32:18.880 "req_id": 1 00:32:18.880 } 00:32:18.880 Got JSON-RPC error response 00:32:18.880 response: 00:32:18.880 
{ 00:32:18.880 "code": -17, 00:32:18.880 "message": "File exists" 00:32:18.880 } 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.880 
00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:18.880 00:37:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.880 00:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.254 [2024-11-18 00:37:43.697729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:20.254 [2024-11-18 00:37:43.697780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2fe90 with addr=10.0.0.2, port=8010 00:32:20.254 [2024-11-18 00:37:43.697812] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:20.254 [2024-11-18 00:37:43.697827] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:20.254 [2024-11-18 00:37:43.697840] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:21.206 [2024-11-18 00:37:44.700145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:21.206 [2024-11-18 00:37:44.700201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2fe90 with addr=10.0.0.2, port=8010 00:32:21.206 [2024-11-18 00:37:44.700231] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:21.206 [2024-11-18 00:37:44.700246] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:21.206 [2024-11-18 00:37:44.700259] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:22.139 [2024-11-18 00:37:45.702374] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:22.139 request: 00:32:22.139 { 00:32:22.139 "name": "nvme_second", 00:32:22.139 "trtype": "tcp", 00:32:22.139 "traddr": "10.0.0.2", 00:32:22.139 "adrfam": "ipv4", 00:32:22.139 "trsvcid": "8010", 00:32:22.139 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:22.139 "wait_for_attach": false, 00:32:22.139 "attach_timeout_ms": 3000, 
00:32:22.139 "method": "bdev_nvme_start_discovery", 00:32:22.139 "req_id": 1 00:32:22.139 } 00:32:22.139 Got JSON-RPC error response 00:32:22.139 response: 00:32:22.139 { 00:32:22.140 "code": -110, 00:32:22.140 "message": "Connection timed out" 00:32:22.140 } 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@161 -- # kill 366768 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:22.140 rmmod nvme_tcp 00:32:22.140 rmmod nvme_fabrics 00:32:22.140 rmmod nvme_keyring 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 366745 ']' 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 366745 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 366745 ']' 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 366745 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 366745 00:32:22.140 00:37:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 366745' 00:32:22.140 killing process with pid 366745 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 366745 00:32:22.140 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 366745 00:32:22.400 00:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:22.400 00:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:22.400 00:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:22.400 00:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:22.400 00:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:32:22.400 00:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:22.400 00:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:32:22.400 00:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:22.400 00:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:22.400 00:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.400 00:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:22.400 00:37:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.305 00:37:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:24.305 00:32:24.305 real 0m14.740s 00:32:24.305 user 0m21.540s 00:32:24.305 sys 0m3.085s 00:32:24.305 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:24.305 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:24.305 ************************************ 00:32:24.305 END TEST nvmf_host_discovery 00:32:24.305 ************************************ 00:32:24.305 00:37:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:24.305 00:37:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:24.305 00:37:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:24.305 00:37:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.565 ************************************ 00:32:24.565 START TEST nvmf_host_multipath_status 00:32:24.565 ************************************ 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:24.565 * Looking for test storage... 
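The discovery run above hit both JSON-RPC error responses that `bdev_nvme_start_discovery` returns in this flow: code -17 ("File exists") when a discovery service with the same `-b` name is already running, and code -110 ("Connection timed out") when the `-T` attach timeout expires. A hypothetical classifier for those codes (illustrative only, not part of the SPDK scripts):

```shell
# Hypothetical helper mapping the JSON-RPC error codes seen in the log
# above to outcomes; codes and messages come from the trace, the helper
# itself is illustrative and not part of SPDK.
classify_rpc_error() {
    case $1 in
        -17)  echo "already-exists" ;;   # duplicate discovery name (-b nvme)
        -110) echo "timed-out" ;;        # attach timeout (-T 3000) expired
        *)    echo "other" ;;
    esac
}

classify_rpc_error -17    # already-exists
classify_rpc_error -110   # timed-out
```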
00:32:24.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:24.565 00:37:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:24.565 00:37:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:24.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.565 --rc genhtml_branch_coverage=1 00:32:24.565 --rc genhtml_function_coverage=1 00:32:24.565 --rc genhtml_legend=1 00:32:24.565 --rc geninfo_all_blocks=1 00:32:24.565 --rc geninfo_unexecuted_blocks=1 00:32:24.565 00:32:24.565 ' 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:24.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.565 --rc genhtml_branch_coverage=1 00:32:24.565 --rc genhtml_function_coverage=1 00:32:24.565 --rc genhtml_legend=1 00:32:24.565 --rc geninfo_all_blocks=1 00:32:24.565 --rc geninfo_unexecuted_blocks=1 00:32:24.565 00:32:24.565 ' 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:24.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.565 --rc genhtml_branch_coverage=1 00:32:24.565 --rc genhtml_function_coverage=1 00:32:24.565 --rc genhtml_legend=1 00:32:24.565 --rc geninfo_all_blocks=1 00:32:24.565 --rc geninfo_unexecuted_blocks=1 00:32:24.565 00:32:24.565 ' 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:24.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.565 --rc genhtml_branch_coverage=1 00:32:24.565 --rc genhtml_function_coverage=1 00:32:24.565 --rc genhtml_legend=1 00:32:24.565 --rc geninfo_all_blocks=1 00:32:24.565 --rc geninfo_unexecuted_blocks=1 00:32:24.565 00:32:24.565 ' 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:24.565 
00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
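The `lt 1.15 2` check traced just above (scripts/common.sh cmp_versions) splits dotted versions into components and compares them numerically, which is why 1.15 sorts below 2 even though "15" is lexically greater than "2". A reduced re-creation of that comparison (the real helper also splits on '-' and ':'):

```shell
# Reduced sketch of the cmp_versions logic traced above: compare dotted
# version strings component by component, numerically, treating missing
# components as 0. lt returns 0 (true) when $1 < $2.
lt() {
    local IFS=.
    local -a ver1=($1) ver2=($2)
    local i
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}

lt 1.15 2 && echo "1.15 < 2"
lt 2.1 2.0 || echo "2.1 >= 2.0"
```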
00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:24.565 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:24.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:24.566 00:37:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:24.566 00:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:27.094 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:27.094 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:27.094 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:27.094 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:27.094 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:27.094 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:27.094 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:27.094 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:27.094 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:27.094 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:27.094 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:27.094 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:27.095 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:27.095 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:27.095 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.095 00:37:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:27.095 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:27.095 00:37:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:27.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:27.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:32:27.095 00:32:27.095 --- 10.0.0.2 ping statistics --- 00:32:27.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.095 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:27.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:27.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:32:27.095 00:32:27.095 --- 10.0.0.1 ping statistics --- 00:32:27.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.095 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:27.095 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:27.096 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:27.096 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:27.096 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:27.096 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:27.096 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:27.096 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=370070 00:32:27.096 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:27.096 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 370070 00:32:27.096 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 370070 ']' 00:32:27.096 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.096 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:27.096 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:27.096 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:27.096 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:27.096 [2024-11-18 00:37:50.695608] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:32:27.096 [2024-11-18 00:37:50.695722] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:27.096 [2024-11-18 00:37:50.770080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:27.096 [2024-11-18 00:37:50.815678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.096 [2024-11-18 00:37:50.815746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:27.096 [2024-11-18 00:37:50.815759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:27.096 [2024-11-18 00:37:50.815777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:27.096 [2024-11-18 00:37:50.815787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:27.096 [2024-11-18 00:37:50.820331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.096 [2024-11-18 00:37:50.820342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.354 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:27.354 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:27.354 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:27.354 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:27.354 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:27.354 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:27.354 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=370070 00:32:27.354 00:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:27.612 [2024-11-18 00:37:51.210950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.612 00:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:32:27.873 Malloc0 00:32:27.873 00:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:28.130 00:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:28.388 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:28.646 [2024-11-18 00:37:52.328822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.646 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:28.904 [2024-11-18 00:37:52.597509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:28.904 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=370238 00:32:28.904 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:28.904 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:28.904 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 370238 /var/tmp/bdevperf.sock 00:32:28.904 00:37:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 370238 ']' 00:32:28.904 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:28.904 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:28.904 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:28.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:28.904 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:28.904 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:29.162 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:29.162 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:29.162 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:29.420 00:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:29.986 Nvme0n1 00:32:29.986 00:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:30.244 Nvme0n1 00:32:30.244 00:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:30.244 00:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:32.789 00:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:32.789 00:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:32.789 00:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:33.047 00:37:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:33.980 00:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:33.980 00:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:33.980 00:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:33.980 00:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:34.238 00:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:34.238 00:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:34.238 00:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:34.238 00:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:34.496 00:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:34.496 00:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:34.496 00:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:34.496 00:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:34.754 00:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:34.754 00:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:34.754 00:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:34.754 00:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:35.012 00:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:35.012 00:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:35.012 00:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.012 00:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:35.270 00:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:35.270 00:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:35.270 00:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.270 00:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:35.527 00:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:35.527 00:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:35.527 00:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:36.092 00:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:36.092 00:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:37.471 00:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:37.471 00:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:37.471 00:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.471 00:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:37.471 00:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:37.471 00:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:37.471 00:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.471 00:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:37.731 00:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.731 00:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:37.731 00:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.731 00:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:37.990 00:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.990 00:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:37.990 00:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.990 00:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:38.248 00:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.248 00:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:38.248 00:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.248 00:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:38.505 00:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.505 00:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:38.505 00:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:38.505 00:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:38.763 00:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:38.763 00:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:38.763 00:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:39.021 00:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:39.586 00:38:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:40.520 00:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:40.520 00:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:40.520 00:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.520 00:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:40.778 00:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:40.779 00:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:40.779 00:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.779 00:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:41.037 00:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:41.037 00:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:41.037 00:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.037 00:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:41.295 00:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:41.295 00:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:41.295 00:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.295 00:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:41.553 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:41.553 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:41.553 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.553 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:41.814 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:41.814 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:41.814 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.814 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:42.071 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:42.071 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:42.071 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:42.330 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:42.588 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:43.959 00:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:43.959 00:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:43.959 00:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.959 00:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:43.959 00:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.960 00:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:43.960 00:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.960 00:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:44.218 00:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:44.218 00:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:44.218 00:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.218 00:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:44.476 00:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.476 00:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:44.476 00:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.476 00:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:44.734 00:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.734 00:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:44.734 00:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.734 00:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:44.992 00:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.992 00:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:44.992 00:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.992 00:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:45.557 00:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:45.557 00:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:45.557 00:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:45.557 00:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:45.814 00:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:47.186 00:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:47.186 00:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:47.186 00:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.186 00:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:47.186 00:38:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:47.186 00:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:47.186 00:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.186 00:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:47.443 00:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:47.443 00:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:47.443 00:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.443 00:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:47.701 00:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:47.701 00:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:47.701 00:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.701 00:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:47.959 
00:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:47.959 00:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:47.959 00:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.959 00:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:48.216 00:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:48.216 00:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:48.216 00:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:48.216 00:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:48.473 00:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:48.473 00:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:48.473 00:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:48.731 00:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:48.989 00:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:50.359 00:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:50.359 00:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:50.359 00:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:50.359 00:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:50.359 00:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:50.359 00:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:50.359 00:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:50.359 00:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:50.616 00:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:50.616 00:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:50.616 00:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:50.616 00:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:50.882 00:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:50.882 00:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:50.882 00:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:50.882 00:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:51.140 00:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:51.140 00:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:51.140 00:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.140 00:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:51.398 00:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:51.398 00:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:51.398 00:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.398 00:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:51.656 00:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:51.656 00:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:51.914 00:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:51.914 00:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:52.479 00:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:52.479 00:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:53.871 00:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:53.871 00:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:53.871 00:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:32:53.871 00:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:53.871 00:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:53.871 00:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:53.871 00:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:53.871 00:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:54.134 00:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.134 00:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:54.134 00:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.134 00:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:54.395 00:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.395 00:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:54.395 00:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:32:54.395 00:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:54.653 00:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.653 00:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:54.653 00:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.653 00:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:54.911 00:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.912 00:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:54.912 00:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.912 00:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:55.170 00:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.170 00:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:55.170 00:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:55.428 00:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:55.686 00:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:57.061 00:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:57.061 00:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:57.061 00:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.061 00:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:57.061 00:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:57.061 00:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:57.061 00:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.061 00:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:57.320 00:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.320 00:38:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:57.320 00:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.320 00:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:57.577 00:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.577 00:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:57.577 00:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.577 00:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:57.835 00:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.835 00:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:57.835 00:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.835 00:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:58.093 00:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.093 
00:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:58.093 00:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.093 00:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:58.350 00:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.350 00:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:58.350 00:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:58.608 00:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:58.866 00:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:00.238 00:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:00.238 00:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:00.238 00:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.238 00:38:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:00.238 00:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.238 00:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:00.238 00:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.238 00:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:00.506 00:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.507 00:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:00.507 00:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.507 00:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:00.769 00:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.769 00:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:00.769 00:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.769 00:38:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:01.028 00:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.028 00:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:01.028 00:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.028 00:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:01.286 00:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.286 00:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:01.286 00:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.286 00:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:01.544 00:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.544 00:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:01.544 00:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:01.804 00:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:02.062 00:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:03.436 00:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:03.436 00:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:03.436 00:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.436 00:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:03.436 00:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.436 00:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:03.436 00:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.436 00:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:03.694 00:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:03.694 00:38:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:03.694 00:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.694 00:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:03.952 00:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.952 00:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:03.952 00:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.952 00:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:04.226 00:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.226 00:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:04.226 00:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.227 00:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:04.486 00:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.486 
00:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:04.486 00:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.486 00:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:04.744 00:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:04.744 00:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 370238 00:33:04.744 00:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 370238 ']' 00:33:04.744 00:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 370238 00:33:04.744 00:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:04.744 00:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:04.744 00:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 370238 00:33:05.032 00:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:05.032 00:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:05.032 00:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 370238' 00:33:05.032 killing process with pid 370238 00:33:05.032 00:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 370238 00:33:05.032 00:38:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 370238 00:33:05.032 { 00:33:05.032 "results": [ 00:33:05.032 { 00:33:05.032 "job": "Nvme0n1", 00:33:05.032 "core_mask": "0x4", 00:33:05.032 "workload": "verify", 00:33:05.032 "status": "terminated", 00:33:05.032 "verify_range": { 00:33:05.032 "start": 0, 00:33:05.032 "length": 16384 00:33:05.032 }, 00:33:05.032 "queue_depth": 128, 00:33:05.032 "io_size": 4096, 00:33:05.032 "runtime": 34.409231, 00:33:05.032 "iops": 7883.9018518025005, 00:33:05.032 "mibps": 30.796491608603517, 00:33:05.032 "io_failed": 0, 00:33:05.032 "io_timeout": 0, 00:33:05.032 "avg_latency_us": 16206.173878301866, 00:33:05.032 "min_latency_us": 191.9051851851852, 00:33:05.032 "max_latency_us": 4076242.1096296296 00:33:05.032 } 00:33:05.032 ], 00:33:05.032 "core_count": 1 00:33:05.032 } 00:33:05.032 00:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 370238 00:33:05.032 00:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:05.032 [2024-11-18 00:37:52.658547] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:33:05.032 [2024-11-18 00:37:52.658653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid370238 ] 00:33:05.032 [2024-11-18 00:37:52.726554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.032 [2024-11-18 00:37:52.776272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:05.032 Running I/O for 90 seconds... 
00:33:05.032 8378.00 IOPS, 32.73 MiB/s [2024-11-17T23:38:28.854Z] 8444.50 IOPS, 32.99 MiB/s [2024-11-17T23:38:28.854Z] 8469.67 IOPS, 33.08 MiB/s [2024-11-17T23:38:28.854Z] 8484.25 IOPS, 33.14 MiB/s [2024-11-17T23:38:28.854Z] 8472.00 IOPS, 33.09 MiB/s [2024-11-17T23:38:28.854Z] 8478.00 IOPS, 33.12 MiB/s [2024-11-17T23:38:28.854Z] 8441.14 IOPS, 32.97 MiB/s [2024-11-17T23:38:28.854Z] 8397.62 IOPS, 32.80 MiB/s [2024-11-17T23:38:28.854Z] 8406.00 IOPS, 32.84 MiB/s [2024-11-17T23:38:28.854Z] 8410.40 IOPS, 32.85 MiB/s [2024-11-17T23:38:28.854Z] 8414.64 IOPS, 32.87 MiB/s [2024-11-17T23:38:28.854Z] 8414.33 IOPS, 32.87 MiB/s [2024-11-17T23:38:28.854Z] 8425.54 IOPS, 32.91 MiB/s [2024-11-17T23:38:28.854Z] 8423.71 IOPS, 32.91 MiB/s [2024-11-17T23:38:28.854Z] 8436.73 IOPS, 32.96 MiB/s [2024-11-17T23:38:28.854Z] [2024-11-18 00:38:09.330440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.330493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.330530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.330550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.330574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.330592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.330616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.330649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.330673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.330690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.330711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.330728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.330750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.330781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.330802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.330818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.330839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.330854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.330886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.330902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.330923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.330938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.330973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.330989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.331010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.331039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.331061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.331077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.331098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107920 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.331113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.331134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.331150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.331372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.331410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.331438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.331461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.331485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.331503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:05.032 [2024-11-18 00:38:09.331525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.032 [2024-11-18 00:38:09.331542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 
cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.331564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.331581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.331603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.331625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.331648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.331665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.331703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.331720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.331742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.331773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.331795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:108008 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.331811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.331831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.331847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.331884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.331899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.331933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.331950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.331972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.331988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.332008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.332024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 
m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.332045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.332061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.332081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.332097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.332118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.332138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.332159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.332175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.332196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.332212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.332248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:05.033 [2024-11-18 00:38:09.332264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.332285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.332323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.332347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.332378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.332401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.332418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.332723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.332759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.332786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.332809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:33:05.033 [2024-11-18 00:38:09.332832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.332849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.332870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.332887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.332908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.332925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.332947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.332963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.332989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.333006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.333045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:05.033 [2024-11-18 00:38:09.333062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.333100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.333116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.333138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.333169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.333191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.333208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.333228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.333244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.333264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.333280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:33:05.033 [2024-11-18 00:38:09.333325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.333344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.333383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.333400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.333423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.333441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.333694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.333730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.333759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.033 [2024-11-18 00:38:09.333778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:05.033 [2024-11-18 00:38:09.333805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.034 
[2024-11-18 00:38:09.333824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.333846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.034 [2024-11-18 00:38:09.333863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.333885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.034 [2024-11-18 00:38:09.333902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.333925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.034 [2024-11-18 00:38:09.333941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.333963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.034 [2024-11-18 00:38:09.333980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:05.034 
[2024-11-18 00:38:09.334058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 
[2024-11-18 00:38:09.334275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:05.034 
[2024-11-18 00:38:09.334541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 
[2024-11-18 00:38:09.334776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.334949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.334965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:05.034 
[2024-11-18 00:38:09.334986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.335002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.335023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.335038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.335059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.335075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.335096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.335112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.335133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.335148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.335169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 
[2024-11-18 00:38:09.335185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.335206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.335221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.335242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.335258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.335279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.034 [2024-11-18 00:38:09.335325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.335738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.034 [2024-11-18 00:38:09.335761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:05.034 [2024-11-18 00:38:09.335787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.034 [2024-11-18 00:38:09.335806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:05.035 
[2024-11-18 00:38:09.335828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.335845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.335866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.335882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.335904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.335921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.335943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.335959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.335980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.335997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 
[2024-11-18 00:38:09.336035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:05.035 
[2024-11-18 00:38:09.336253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 
[2024-11-18 00:38:09.336492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:05.035 
[2024-11-18 00:38:09.336722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.336928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 
[2024-11-18 00:38:09.336967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.336989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.337006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.337028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.337045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.337068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.035 [2024-11-18 00:38:09.337085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.337123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.035 [2024-11-18 00:38:09.337140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.337162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.035 [2024-11-18 00:38:09.337195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:05.035 
[2024-11-18 00:38:09.337216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.035 [2024-11-18 00:38:09.337233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.337258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.035 [2024-11-18 00:38:09.337274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.337318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.035 [2024-11-18 00:38:09.337337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.337361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.035 [2024-11-18 00:38:09.337378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.337400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.035 [2024-11-18 00:38:09.337416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:05.035 [2024-11-18 00:38:09.337820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.036 
[2024-11-18 00:38:09.337842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:05.036
[... repetitive NOTICE output elided: alternating nvme_io_qpair_print_command / spdk_nvme_print_completion pairs on sqid:1 — WRITE commands (nsid:1, lba 107872-108320, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (nsid:1, lba 107304-107768, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) — every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing 0019 through 000c, p:0 m:0 dnr:0, timestamps 00:38:09.337842-00:38:09.343421 ...]
[2024-11-18 00:38:09.343421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 
00:38:09.343442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.039 [2024-11-18 00:38:09.343458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.343479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.039 [2024-11-18 00:38:09.343494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.343520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.039 [2024-11-18 00:38:09.343538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.343559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.039 [2024-11-18 00:38:09.343575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.343611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.039 [2024-11-18 00:38:09.343627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.343648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 
00:38:09.343664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.343684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.343700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.343720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.343735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.343756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.343772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.343792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.343807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.343828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.343844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 
00:38:09.344252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.344279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.344306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.344333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.344357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.344375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.344398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.344419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.344442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.344459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.344482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 
00:38:09.344499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.344520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.344537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.344559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.344576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.344598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.344630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.344652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.344667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.344688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.344704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 
00:38:09.344725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.344741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.344761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.344777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.344798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.344813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.344834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.344850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.344871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.344892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.344914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 
00:38:09.344930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.344951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.344967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.344988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.345004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.345025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.345040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.345061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.345078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:05.039 [2024-11-18 00:38:09.345098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.039 [2024-11-18 00:38:09.345113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 
00:38:09.345134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345356] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345568] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.345968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.345989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.346004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.346025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.346056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.346079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.346096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.346118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.346135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.346157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.346174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.346196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.346213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.346235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.346251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.346273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.346290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.346323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.346343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.346365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.346382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.346403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.346421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.346443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.346459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.346481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.040 [2024-11-18 00:38:09.346497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.346519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.040 [2024-11-18 00:38:09.346536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.346558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.040 [2024-11-18 00:38:09.346574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.346612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.040 [2024-11-18 00:38:09.346629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:05.040 [2024-11-18 00:38:09.346651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.041 [2024-11-18 00:38:09.346681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:05.041 [2024-11-18 00:38:09.346702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.041 [2024-11-18 00:38:09.346718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 ... (repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs omitted: interleaved READ and WRITE commands on sqid:1, lba 107336-108320, len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0054 through 0047, p:0 m:0 dnr:0) ... 00:33:05.044 [2024-11-18 00:38:09.352618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.044 [2024-11-18 00:38:09.352635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.352675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.044 [2024-11-18 00:38:09.352692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.352728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.044 [2024-11-18 00:38:09.352743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.352765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.044 [2024-11-18 00:38:09.352780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.352800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.044 [2024-11-18 00:38:09.352816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.352835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.044 [2024-11-18 00:38:09.352851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.352871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.044 [2024-11-18 00:38:09.352886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.352906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.044 [2024-11-18 00:38:09.352921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.352941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.044 [2024-11-18 00:38:09.352956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.352976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.352991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.353011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.353029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.353050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.353066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.353086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.353101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.353121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.353136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.353156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.353172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354547] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.044 [2024-11-18 00:38:09.354797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:05.044 [2024-11-18 00:38:09.354817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.354836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.354857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.354873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.354893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.354908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.354929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.354945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.354965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.354980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.045 [2024-11-18 00:38:09.355050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.045 [2024-11-18 00:38:09.355086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:107648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.355972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.355994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.356011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.356034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.356050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.356073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.356090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.356129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.356146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.356183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.356199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.356219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.045 [2024-11-18 00:38:09.356235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:05.045 [2024-11-18 00:38:09.356255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.046 [2024-11-18 00:38:09.356270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:05.046 [2024-11-18 00:38:09.356305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.046 [2024-11-18 00:38:09.356330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:05.046 [2024-11-18 00:38:09.356354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.046 [2024-11-18 00:38:09.356375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:05.046 [2024-11-18 00:38:09.356398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.046 [2024-11-18 00:38:09.356415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
[... identical command/completion pairs elided: READ and WRITE I/Os (len:8) on qid:1 spanning lba 107304-108320, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), logged between 00:38:09.356437 and 00:38:09.362371 ...]
00:33:05.049 [2024-11-18 00:38:09.362393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.049 [2024-11-18 00:38:09.362409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.362431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.049 [2024-11-18 00:38:09.362448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.362471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.049 [2024-11-18 00:38:09.362487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.362509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.049 [2024-11-18 00:38:09.362526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.362548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.049 [2024-11-18 00:38:09.362564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.362586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.049 [2024-11-18 00:38:09.362603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.362625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.049 [2024-11-18 00:38:09.362659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.362684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.049 [2024-11-18 00:38:09.362700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.362738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.049 [2024-11-18 00:38:09.362753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.362773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.049 [2024-11-18 00:38:09.362788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.362808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.049 [2024-11-18 00:38:09.362823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.362843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.049 [2024-11-18 00:38:09.362858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.362878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.049 [2024-11-18 00:38:09.362893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.362913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.049 [2024-11-18 00:38:09.362928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.362948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.049 [2024-11-18 00:38:09.362962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.362983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.362998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.363018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.363033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.363053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.363069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.363660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.363692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.363719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.363736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.363758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.363774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.363794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.363810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.363841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.363856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.363878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.363893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.363914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.363929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.363950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.363966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.364002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.364018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.364038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.364053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.364073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.364090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.364110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.364125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.364145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.364169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.364190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.364205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.364226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.364250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.364270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.364285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.364334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.364353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.364375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.364392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:05.049 [2024-11-18 00:38:09.364414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.049 [2024-11-18 00:38:09.364430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.364453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.364469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.364491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.364508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.364530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.364558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.364580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.364610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.364638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.364653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.364674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.364690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.364726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.364742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.364777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.364792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.364813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.364828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.364849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.364864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.364884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.364899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.364919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.364934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.364954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.364970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.364989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:05.050 [2024-11-18 00:38:09.365889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.050 [2024-11-18 00:38:09.365904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:33:05.051 [2024-11-18 00:38:09.365925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.051 [2024-11-18 00:38:09.365948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0
[... several hundred further READ/WRITE command/completion pairs omitted: sqid:1, nsid:1, lba 107304-108320, len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:33:05.054 [2024-11-18 00:38:09.371934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.054 [2024-11-18 00:38:09.371950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:33:05.054 [2024-11-18 00:38:09.371971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.371986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.372041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.372091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.372131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.372170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.372208] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.372247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.372285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.372332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.372372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.372411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.372451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.372490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.372529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.372574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.372632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.372684] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.372720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.054 [2024-11-18 00:38:09.372757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.054 [2024-11-18 00:38:09.372792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.054 [2024-11-18 00:38:09.372828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.372849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.054 [2024-11-18 00:38:09.372865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:05.054 [2024-11-18 00:38:09.373603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.054 [2024-11-18 00:38:09.373650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.373678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.373696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.373723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.373744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.373766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.373784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.373805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.373822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.373864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.373882] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.373906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.373938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.373961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.373977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374367] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.055 [2024-11-18 00:38:09.374834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.055 [2024-11-18 00:38:09.374874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.374965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.374980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.375000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.375015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.375035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.375050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.375071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.375086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.375107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.375123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.375143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.375159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.375179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.055 [2024-11-18 00:38:09.375194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:05.055 [2024-11-18 00:38:09.375215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.375251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.375287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.375354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.375411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.375450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.375489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.375528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.375567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.375606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.375644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.375683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.375723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.375761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.375801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.375859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.375898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.375935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.375972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.375988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.376009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.376025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.376046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.376062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.376084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.376099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.376121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.056 [2024-11-18 00:38:09.376137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.376159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.056 [2024-11-18 00:38:09.376175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.376796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.056 [2024-11-18 00:38:09.376820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.376851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.056 [2024-11-18 00:38:09.376871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.376894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.056 [2024-11-18 00:38:09.376915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.376939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.056 [2024-11-18 00:38:09.376956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.376979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.056 [2024-11-18 00:38:09.377010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.377032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.056 [2024-11-18 00:38:09.377048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.377085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.056 [2024-11-18 00:38:09.377101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.377123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.056 [2024-11-18 00:38:09.377139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.377159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.056 [2024-11-18 00:38:09.377175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.377195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.056 [2024-11-18 00:38:09.377226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.377247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.056 [2024-11-18 00:38:09.377262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.377282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.056 [2024-11-18 00:38:09.377319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.377344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.056 [2024-11-18 00:38:09.377359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.377380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.056 [2024-11-18 00:38:09.377396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:05.056 [2024-11-18 00:38:09.377417] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.056 [2024-11-18 00:38:09.377432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.377457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.377473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.377494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.377510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.377530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.377546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.377566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.377582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.377620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.377636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.377658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.377688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.377710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.377740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.377763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.377780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.377801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.377833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.377857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.377874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.377896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.377912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.377933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.377950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.377976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.378965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.378982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.379004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.379021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.379058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.379078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.379100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.379116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:05.057 [2024-11-18 00:38:09.379138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.057 [2024-11-18 00:38:09.379169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.379192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.058 [2024-11-18 00:38:09.379222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.379244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.058 [2024-11-18 00:38:09.379259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.379294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.058 [2024-11-18 00:38:09.379317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.379355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.058 [2024-11-18 00:38:09.379372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.379395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.058 [2024-11-18 00:38:09.379426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.379449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.058 [2024-11-18 00:38:09.379466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.379488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.058 [2024-11-18 00:38:09.379505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.379527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.058 [2024-11-18 00:38:09.379544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.379565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.379582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.379604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.379621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.380483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.380507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.380535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.380553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.380581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.380599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.380621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.380638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.380660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.380677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.380703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.380721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.380743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.380761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.380783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.380800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.380822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.380839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.380861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.380877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.380900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.380916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.380938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.380954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.380981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.381013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.381036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.381052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.381073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.381089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.381110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.381126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.381147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.381178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.381200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.381230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.381251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.381266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.381286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.381327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.381351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.381368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.381407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.381425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.381447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.381463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.381486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.381502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.381529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.381547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.381569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.381586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.381624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.381641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.381662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.058 [2024-11-18 00:38:09.381678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:05.058 [2024-11-18 00:38:09.381700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.381716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.381737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.059 [2024-11-18 00:38:09.381753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.381775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.059 [2024-11-18 00:38:09.381791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.381830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.381847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.381869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.381885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.381907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.381939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.381961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.381977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.381999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.382968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.382983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.383003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.383018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.383038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.383053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.383073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.383091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.383112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.383127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:05.059 [2024-11-18 00:38:09.383148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.059 [2024-11-18 00:38:09.383163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.383753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.383775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.383800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.383818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.383839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.383855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.383876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.383892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.383912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.383928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.383948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.383964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.383984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.384976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.384992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.385013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.385029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.385050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.385065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.385086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.385102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.385123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.385142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.385163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.385179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.385200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.385215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.385236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.385252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:05.060 [2024-11-18 00:38:09.385273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.060 [2024-11-18 00:38:09.385289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:05.061 [2024-11-18 00:38:09.385335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.061 [2024-11-18 00:38:09.385353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:05.061 [2024-11-18 00:38:09.385391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.061 [2024-11-18 00:38:09.385408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:05.061 [2024-11-18 00:38:09.385430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.061 [2024-11-18 00:38:09.385446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:05.061 [2024-11-18 00:38:09.385467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.061 [2024-11-18 00:38:09.385499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:05.061 [2024-11-18 00:38:09.385522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.061 [2024-11-18 00:38:09.385538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:05.061 [2024-11-18 00:38:09.385561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.061 [2024-11-18 00:38:09.385578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:05.061 [2024-11-18 00:38:09.385600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.061 [2024-11-18 00:38:09.385617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:05.061 [2024-11-18 00:38:09.385639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.061 [2024-11-18 00:38:09.385660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.385683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.061 [2024-11-18 00:38:09.385699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.385721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.061 [2024-11-18 00:38:09.385737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.385759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.061 [2024-11-18 00:38:09.385776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.385813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.061 [2024-11-18 00:38:09.385828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.385864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.061 [2024-11-18 00:38:09.385880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.385901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.061 [2024-11-18 00:38:09.385916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.385935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.061 [2024-11-18 00:38:09.385950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.385971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.061 [2024-11-18 00:38:09.385987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.386008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.061 [2024-11-18 00:38:09.386023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.386043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.061 [2024-11-18 00:38:09.386059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.386079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.061 [2024-11-18 00:38:09.386094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.386114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.061 [2024-11-18 00:38:09.386128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.386153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.061 [2024-11-18 00:38:09.386169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.386189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.061 [2024-11-18 00:38:09.386204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.386224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.061 [2024-11-18 00:38:09.386239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.386259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.061 [2024-11-18 00:38:09.386274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.386318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.061 [2024-11-18 00:38:09.386336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.387094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.061 [2024-11-18 00:38:09.387136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.387163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.061 [2024-11-18 00:38:09.387197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.387221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.061 [2024-11-18 00:38:09.387238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.387261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.061 [2024-11-18 00:38:09.387278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.387300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.061 [2024-11-18 00:38:09.387326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.387350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.061 [2024-11-18 00:38:09.387368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.387390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.061 [2024-11-18 00:38:09.387407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.387435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.061 [2024-11-18 00:38:09.387453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.387476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.061 [2024-11-18 00:38:09.387493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.387515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.061 [2024-11-18 00:38:09.387532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.387554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.061 [2024-11-18 00:38:09.387570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.387593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.061 [2024-11-18 00:38:09.387609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.387647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.061 [2024-11-18 00:38:09.387663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:33:05.061 [2024-11-18 00:38:09.387699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.387716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.387737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.387752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.387773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.387789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.387810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.387825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.387846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.387861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.387882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.387899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.387923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.387939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.387960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.387975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.387996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.062 [2024-11-18 00:38:09.388404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.062 [2024-11-18 00:38:09.388447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.388967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.388983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.389004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.389019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.389040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.389056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.389076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.389091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.389112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.389128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.389148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.389163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.389184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.389199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.389220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.062 [2024-11-18 00:38:09.389250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:33:05.062 [2024-11-18 00:38:09.389271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.063 [2024-11-18 00:38:09.389287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.389333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.063 [2024-11-18 00:38:09.389353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.389376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.063 [2024-11-18 00:38:09.389393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.389415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.063 [2024-11-18 00:38:09.389436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.389459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.063 [2024-11-18 00:38:09.389477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.389499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.063 [2024-11-18 00:38:09.389516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.389538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.063 [2024-11-18 00:38:09.389555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.389577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.063 [2024-11-18 00:38:09.389593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.389630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.063 [2024-11-18 00:38:09.389645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.389666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.063 [2024-11-18 00:38:09.389681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.389702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.063 [2024-11-18 00:38:09.389717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.389964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.063 [2024-11-18 00:38:09.389987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.390974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.390990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.391014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.063 [2024-11-18 00:38:09.391030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:33:05.063 [2024-11-18 00:38:09.391054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.064 [2024-11-18 00:38:09.391070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:33:05.064 [2024-11-18 00:38:09.391094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.064 [2024-11-18 00:38:09.391110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:33:05.064 [2024-11-18 00:38:09.391134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.064 [2024-11-18 00:38:09.391150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:33:05.064 [2024-11-18 00:38:09.391174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.064 [2024-11-18 00:38:09.391190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:33:05.064 [2024-11-18 00:38:09.391215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.064 [2024-11-18 00:38:09.391230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:33:05.064 [2024-11-18 00:38:09.391255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.064 [2024-11-18 00:38:09.391270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:33:05.064 [2024-11-18 00:38:09.391317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.064 [2024-11-18 00:38:09.391341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:33:05.064 [2024-11-18 00:38:09.391369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.064 [2024-11-18 00:38:09.391386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:33:05.064 [2024-11-18 00:38:09.391412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.064 [2024-11-18 00:38:09.391429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:33:05.064 [2024-11-18 00:38:09.391455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.064 [2024-11-18 00:38:09.391472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:33:05.064 [2024-11-18 00:38:09.391498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.064 [2024-11-18 00:38:09.391514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:33:05.064 [2024-11-18 00:38:09.391540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.064 [2024-11-18 00:38:09.391557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:33:05.064 [2024-11-18 00:38:09.391583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.391615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.391640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.391656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.391680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.391696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.391721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.391736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.391760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.391776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.391801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.391816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.391841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.391863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.391889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.391905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.391930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.391945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.391970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.391985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.392010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.392040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.392065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.392081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.392105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.392120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.392144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.392159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.392183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.392199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.392223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.392238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.392262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.392277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.392324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.392342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.392369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.392389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.392415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.392432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.392459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.392476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.392501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.392518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.392543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.392559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.392584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.392615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.392640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.392670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.392695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.064 [2024-11-18 00:38:09.392711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:05.064 [2024-11-18 00:38:09.392735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:09.392750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:09.392773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:09.392788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:09.392812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:09.392828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:09.392991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.065 [2024-11-18 00:38:09.393011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:05.065 8006.31 IOPS, 31.27 MiB/s [2024-11-17T23:38:28.887Z] 7535.35 IOPS, 29.43 MiB/s [2024-11-17T23:38:28.887Z] 7116.72 IOPS, 27.80 MiB/s [2024-11-17T23:38:28.887Z] 6742.16 IOPS, 26.34 MiB/s [2024-11-17T23:38:28.887Z] 6717.15 IOPS, 26.24 MiB/s [2024-11-17T23:38:28.887Z] 6789.48 IOPS, 26.52 MiB/s [2024-11-17T23:38:28.887Z] 6870.41 IOPS, 26.84 MiB/s [2024-11-17T23:38:28.887Z] 7053.83 IOPS, 27.55 MiB/s [2024-11-17T23:38:28.887Z] 7225.38 IOPS, 28.22 MiB/s [2024-11-17T23:38:28.887Z] 7384.44 IOPS, 28.85 MiB/s [2024-11-17T23:38:28.887Z] 7414.08 IOPS, 28.96 MiB/s [2024-11-17T23:38:28.887Z] 7441.44 IOPS, 29.07 MiB/s [2024-11-17T23:38:28.887Z] 7464.57 IOPS, 29.16 MiB/s [2024-11-17T23:38:28.887Z] 7539.17 IOPS, 29.45 MiB/s [2024-11-17T23:38:28.887Z] 7660.93 IOPS, 29.93 MiB/s [2024-11-17T23:38:28.887Z] 7777.81 IOPS, 30.38 MiB/s [2024-11-17T23:38:28.887Z] [2024-11-18 00:38:25.857417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.857512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.857553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.857581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.857606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.857650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.857675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.857692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.857714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.857731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.857754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.857770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.857806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 
lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.857824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.857845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.857862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.857883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.857900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.860107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.860137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.860181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.860210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.860233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.860260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.860298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.860323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.860364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.860381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.860404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.860421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.860456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.860475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.860497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.860514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.860536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20896 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.860553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.860576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.860592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.860614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.860632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.860654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.860671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.860694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.860710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.860733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.860750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 
dnr:0 00:33:05.065 [2024-11-18 00:38:25.860772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.860789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.860830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.860848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.860887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.860905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.860926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.860943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.862662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.065 [2024-11-18 00:38:25.862688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.862715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:05.065 [2024-11-18 00:38:25.862732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.862754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.065 [2024-11-18 00:38:25.862769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.862790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.862806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.862826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.065 [2024-11-18 00:38:25.862842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:05.065 [2024-11-18 00:38:25.862863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.862879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.862900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.862916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:05.066 
[2024-11-18 00:38:25.862937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.862953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.862973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.862990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 
00:38:25.863166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863395] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.863965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.863985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.864004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.864026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.066 [2024-11-18 00:38:25.864042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.864063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.066 [2024-11-18 00:38:25.864078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.864099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.864114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.864134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.864149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.864170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.864185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.864205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.864221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.864241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.864257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.864277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.864292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.864336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.864355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.864378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.066 [2024-11-18 00:38:25.864394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:05.066 [2024-11-18 00:38:25.864415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.067 [2024-11-18 00:38:25.864431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.866304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.866363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.866398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.866418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.866441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.866458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.866497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.866514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.866536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.067 [2024-11-18 00:38:25.866552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.866574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.067 [2024-11-18 00:38:25.866590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.866611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.067 [2024-11-18 00:38:25.866643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.866664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.067 [2024-11-18 00:38:25.866680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.866702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.067 [2024-11-18 00:38:25.866718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.866739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.866770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.866793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.866824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.866849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.866866] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.866888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.866905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.866933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.866950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.866974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.866991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.867013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.867041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.867068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.867086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.867109] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.867126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.867164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.867180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.867202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.867219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.867240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.867257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.867278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.867294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.867330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.867354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.867377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.867393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.867414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.867431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.867453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.067 [2024-11-18 00:38:25.867474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.867497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.067 [2024-11-18 00:38:25.867514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.867536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.067 [2024-11-18 00:38:25.867553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.867927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.067 [2024-11-18 00:38:25.867953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.867981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.067 [2024-11-18 00:38:25.867999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.868023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.067 [2024-11-18 00:38:25.868062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.868087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.067 [2024-11-18 00:38:25.868105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.868128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.067 [2024-11-18 00:38:25.868145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.868168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.067 [2024-11-18 00:38:25.868185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.868208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.067 [2024-11-18 00:38:25.868225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.868248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.067 [2024-11-18 00:38:25.868265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.868287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.067 [2024-11-18 00:38:25.868303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:05.067 [2024-11-18 00:38:25.868349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.067 [2024-11-18 00:38:25.868373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.868396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.068 [2024-11-18 00:38:25.868413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.868434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.068 [2024-11-18 00:38:25.868450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.868471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.068 [2024-11-18 00:38:25.868488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.868510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.068 [2024-11-18 00:38:25.868525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.868546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.868562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.868584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.068 [2024-11-18 00:38:25.868600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.868621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.068 [2024-11-18 00:38:25.868653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.869217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.068 [2024-11-18 00:38:25.869251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.869282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.068 [2024-11-18 00:38:25.869301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.869333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.869352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.869375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.869392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.869414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.869431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.869458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.869475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.869498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.068 [2024-11-18 00:38:25.869515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.869538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.068 [2024-11-18 00:38:25.869554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.869577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.869594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.869632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.869649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.869670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.869686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.869706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.869749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.869774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.869791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.869813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.869829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.869861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.869880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.869901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.869918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.869940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.869956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.869997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.068 [2024-11-18 00:38:25.870015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.870038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.870055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.870078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.870094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.870116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.068 [2024-11-18 00:38:25.870132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.870155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.068 [2024-11-18 00:38:25.870171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.870194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.870211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.870234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.870250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.870273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.870289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.870317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.870351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.870374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.068 [2024-11-18 00:38:25.870391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:05.068 [2024-11-18 00:38:25.870413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.870429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.870450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.870465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.870491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.870508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.870530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.870546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.870568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.069 [2024-11-18 00:38:25.870599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.871401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.871425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.871451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.871496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.871524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.069 [2024-11-18 00:38:25.871542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.871564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.069 [2024-11-18 00:38:25.871596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.871619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.069 [2024-11-18 00:38:25.871635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.871656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.069 [2024-11-18 00:38:25.871672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.871694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.069 [2024-11-18 00:38:25.871710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.871731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.069 [2024-11-18 00:38:25.871758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.871782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.069 [2024-11-18 00:38:25.871798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.871819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.871843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.871866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.069 [2024-11-18 00:38:25.871883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.873321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.069 [2024-11-18 00:38:25.873346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.873373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.873392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.873415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.873432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.873454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.069 [2024-11-18 00:38:25.873470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.873493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.873510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.873532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.873548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.873571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.873587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.873624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.873640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.873662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.069 [2024-11-18 00:38:25.873677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.873697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.873713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.873734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.069 [2024-11-18 00:38:25.873753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.873776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.873792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.873813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.873829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.873850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.873866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.873886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.873902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.873922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.069 [2024-11-18 00:38:25.873946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.873968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.873984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.874005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.874020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.874041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.874056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.874077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.874093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.874130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.874145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.874165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.874180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:05.069 [2024-11-18 00:38:25.874200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.069 [2024-11-18 00:38:25.874216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.874240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.874256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.874276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.874305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.874338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.874357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.874379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.874396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.874419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.874437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.876736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.876779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.876807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.876825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.876849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.876866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.876888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.876905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.876934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.876950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.876973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.876990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.877028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.877090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.877143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.877206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.877247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.877284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.877351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.877391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.877429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.877470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.877509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.877548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.877587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.877637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.877678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.877718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.877757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.877796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.877851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.877890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.877927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.877949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.877965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.878001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.878017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.878038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.878054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.878075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.070 [2024-11-18 00:38:25.878091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.878814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.878843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.878869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.878887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:05.070 [2024-11-18 00:38:25.878924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.070 [2024-11-18 00:38:25.878953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.878990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.879007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.879028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.879044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.879066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.879083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.879104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.879120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.879141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.879157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.879178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.879194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.879215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.879230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.879251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.879282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.879303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.879353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.879379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.071 [2024-11-18 00:38:25.879397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.881209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.071 [2024-11-18 00:38:25.881234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.881261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.071 [2024-11-18 00:38:25.881279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.881325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.071 [2024-11-18 00:38:25.881355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.881379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.071 [2024-11-18 00:38:25.881396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.881419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.071 [2024-11-18 00:38:25.881435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.881458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.071 [2024-11-18 00:38:25.881475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.881497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.881514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.881536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.881553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.881576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.881608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.881630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.881649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.881686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.881702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.881723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.071 [2024-11-18 00:38:25.881739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.881780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.071 [2024-11-18 00:38:25.881797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.881818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.071 [2024-11-18 00:38:25.881833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.881860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.071 [2024-11-18 00:38:25.881876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.881896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.071 [2024-11-18 00:38:25.881912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.881932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.071 [2024-11-18 00:38:25.881948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.881968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.071 [2024-11-18 00:38:25.881983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.882003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.882019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.882040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.071 [2024-11-18 00:38:25.882056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.882076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.071 [2024-11-18 00:38:25.882092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.882112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.882128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.882148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.882163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.882183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.882198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.882218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.882237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.882259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.882274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.882316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.882336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.882367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.071 [2024-11-18 00:38:25.882384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:05.071 [2024-11-18 00:38:25.884228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.072 [2024-11-18 00:38:25.884278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.884355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.072 [2024-11-18 00:38:25.884377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.884402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.072 [2024-11-18 00:38:25.884419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.884442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.072 [2024-11-18 00:38:25.884459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.884481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.072 [2024-11-18 00:38:25.884498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.884520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.072 [2024-11-18 00:38:25.884537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.884559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.072 [2024-11-18 00:38:25.884577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.884600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.072 [2024-11-18 00:38:25.884616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.885828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.072 [2024-11-18 00:38:25.885858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.885886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.072 [2024-11-18 00:38:25.885905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.885928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.072 [2024-11-18 00:38:25.885946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.885969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.072 [2024-11-18 00:38:25.885986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.072 [2024-11-18 00:38:25.886027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.072 [2024-11-18 00:38:25.886067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.072 [2024-11-18 00:38:25.886107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.072 [2024-11-18 00:38:25.886146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.072 [2024-11-18 00:38:25.886186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.072 [2024-11-18 00:38:25.886225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.072 [2024-11-18 00:38:25.886265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.072 [2024-11-18 00:38:25.886304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.072 [2024-11-18 00:38:25.886357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.072 [2024-11-18 00:38:25.886403] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.072 [2024-11-18 00:38:25.886442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.072 [2024-11-18 00:38:25.886483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.072 [2024-11-18 00:38:25.886522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.072 [2024-11-18 00:38:25.886561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.072 [2024-11-18 00:38:25.886612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.072 [2024-11-18 00:38:25.886667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.072 [2024-11-18 00:38:25.886722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.072 [2024-11-18 00:38:25.886763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.072 [2024-11-18 00:38:25.886802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.072 [2024-11-18 00:38:25.886841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.072 [2024-11-18 00:38:25.886880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.072 [2024-11-18 00:38:25.886923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.072 [2024-11-18 00:38:25.886964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:05.072 [2024-11-18 00:38:25.886986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.073 [2024-11-18 00:38:25.887003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.073 [2024-11-18 00:38:25.887042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.073 [2024-11-18 00:38:25.887081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.073 [2024-11-18 00:38:25.887120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.887160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.887213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.887251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.887288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.887357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.073 [2024-11-18 00:38:25.887396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.073 [2024-11-18 00:38:25.887439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.073 [2024-11-18 00:38:25.887478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.887517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.887556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.073 [2024-11-18 00:38:25.887595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.073 [2024-11-18 00:38:25.887647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.073 [2024-11-18 00:38:25.887688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.073 [2024-11-18 00:38:25.887726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.073 [2024-11-18 00:38:25.887761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.887798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.887834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.887855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.887871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.890101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.073 [2024-11-18 00:38:25.890131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.890160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.073 [2024-11-18 00:38:25.890178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.890215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.073 [2024-11-18 00:38:25.890231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.890252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.890267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.890289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.890330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.890366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.890383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.890405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.890422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.890445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.890461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.890483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.890500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.890523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.890540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.890563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.890580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.890617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.073 [2024-11-18 00:38:25.890634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.890656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.073 [2024-11-18 00:38:25.890687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.890724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.073 [2024-11-18 00:38:25.890763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.890787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.890803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.890824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.890839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:05.073 [2024-11-18 00:38:25.890860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.073 [2024-11-18 00:38:25.890876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.890896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.074 [2024-11-18 00:38:25.890912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.890933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.074 [2024-11-18 00:38:25.890948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.890968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.074 [2024-11-18 00:38:25.891000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.891023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.074 [2024-11-18 00:38:25.891039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.891077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.074 [2024-11-18 00:38:25.891094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.891126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.074 [2024-11-18 00:38:25.891146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.891169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.074 [2024-11-18 00:38:25.891185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.891207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.074 [2024-11-18 00:38:25.891224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.891254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.074 [2024-11-18 00:38:25.891272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.891294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.074 [2024-11-18 00:38:25.891321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.891347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.074 [2024-11-18 00:38:25.891364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.891386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.074 [2024-11-18 00:38:25.891418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.891440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.074 [2024-11-18 00:38:25.891457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.892501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.074 [2024-11-18 00:38:25.892524] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.892550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.074 [2024-11-18 00:38:25.892567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.892603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.074 [2024-11-18 00:38:25.892620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.892642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.074 [2024-11-18 00:38:25.892658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.892705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.074 [2024-11-18 00:38:25.892725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.892756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.074 [2024-11-18 00:38:25.892775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.892797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.074 [2024-11-18 00:38:25.892813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.892838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.074 [2024-11-18 00:38:25.892855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.892877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.074 [2024-11-18 00:38:25.892893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.892913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.074 [2024-11-18 00:38:25.892929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.892950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.074 [2024-11-18 00:38:25.892965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:05.074 [2024-11-18 00:38:25.892986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.075 [2024-11-18 00:38:25.893018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.893039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.893069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.893093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.893111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.893133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.893150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.893172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.893189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.893212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.893229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.893251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.893268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.893290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.893308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.893339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.893367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.893390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.893407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.893430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.075 [2024-11-18 00:38:25.893447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.893470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.075 [2024-11-18 00:38:25.893486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.894191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.075 [2024-11-18 00:38:25.894214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.894255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.075 [2024-11-18 00:38:25.894272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.894293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.075 [2024-11-18 00:38:25.894335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.894384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.075 [2024-11-18 00:38:25.894401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.894435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.894455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.894478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.894495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.894518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.894536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.894558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.894575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.894597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.075 [2024-11-18 00:38:25.894628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.894651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.075 [2024-11-18 00:38:25.894669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.894691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.894708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.894730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.075 [2024-11-18 00:38:25.894746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.894768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.075 [2024-11-18 00:38:25.894785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.894807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.075 [2024-11-18 00:38:25.894835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.894876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.075 [2024-11-18 00:38:25.894894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.894930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.894947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.894970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.075 [2024-11-18 00:38:25.894985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.895007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.895023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.895522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.075 [2024-11-18 00:38:25.895547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.895574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.895592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.895621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.895638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.895666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.895700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.895721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.895736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.895757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.895772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.895793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.895809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.895828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.895843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:05.075 [2024-11-18 00:38:25.895864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.075 [2024-11-18 00:38:25.895880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.895900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.895915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.895936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.895951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.895999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.896018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.896039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.896055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.896075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.896091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.896112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.896144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.896171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.896204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.896228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.896245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.896268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.896285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.897882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.897905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.897931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.897949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.897970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.898000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.898037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.898079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.898114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.898149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.898185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.898221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.898260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.898320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.898377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.898418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898439] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.898456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.898494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.898532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.898570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.898625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.898678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.898715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.898756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.898794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.898831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.898866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.898886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.076 [2024-11-18 00:38:25.898901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.900981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.901019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.901060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.901077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.901099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.901115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.901135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.076 [2024-11-18 00:38:25.901150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:05.076 [2024-11-18 00:38:25.901170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.077 [2024-11-18 00:38:25.901185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.901220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.077 [2024-11-18 00:38:25.901237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.901258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.077 [2024-11-18 00:38:25.901289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.901320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.077 [2024-11-18 00:38:25.901339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.901372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.077 [2024-11-18 00:38:25.901392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.901421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.077 [2024-11-18 00:38:25.901438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.901461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.077 [2024-11-18 00:38:25.901477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.901501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.077 [2024-11-18 00:38:25.901518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.901540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.077 [2024-11-18 00:38:25.901557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.901608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.077 [2024-11-18 00:38:25.901626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.901648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.077 [2024-11-18 00:38:25.901680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.901702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.077 [2024-11-18 00:38:25.901718] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.901739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.077 [2024-11-18 00:38:25.901755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.901775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.077 [2024-11-18 00:38:25.901791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.901812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.077 [2024-11-18 00:38:25.901843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.901865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.077 [2024-11-18 00:38:25.901880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.901900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.077 [2024-11-18 00:38:25.901915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.901940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.077 [2024-11-18 00:38:25.901956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.901976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.077 [2024-11-18 00:38:25.901991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.902011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.077 [2024-11-18 00:38:25.902027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.902047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.077 [2024-11-18 00:38:25.902076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.902099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.077 [2024-11-18 00:38:25.902115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.902153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.077 [2024-11-18 00:38:25.902170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.902193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.077 [2024-11-18 00:38:25.902210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.903353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.077 [2024-11-18 00:38:25.903393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.903421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.077 [2024-11-18 00:38:25.903439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.903476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.077 [2024-11-18 00:38:25.903494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.903516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.077 [2024-11-18 00:38:25.903533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.903556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.077 [2024-11-18 00:38:25.903573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.903595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.077 [2024-11-18 00:38:25.903616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.903640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.077 [2024-11-18 00:38:25.903658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.903680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.077 [2024-11-18 00:38:25.903697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.903719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.077 [2024-11-18 00:38:25.903737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:05.077 [2024-11-18 00:38:25.903760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.903777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.903799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.903816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.903838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.903855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.903877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.903894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.903916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.078 [2024-11-18 00:38:25.903933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.903956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.078 [2024-11-18 00:38:25.903987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.904010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.078 [2024-11-18 00:38:25.904026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.904046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.078 [2024-11-18 00:38:25.904077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.904099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.078 [2024-11-18 00:38:25.904118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.904140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.078 [2024-11-18 00:38:25.904155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.904175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.078 [2024-11-18 00:38:25.904190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.904210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.904225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.904246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.904261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.904281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.078 [2024-11-18 00:38:25.904297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.904340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.078 [2024-11-18 00:38:25.904358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.904380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.904396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.904416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.904432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.904453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.904469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.904490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.904505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.904526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.078 [2024-11-18 00:38:25.904542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.904562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.078 [2024-11-18 00:38:25.904578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.904603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.078 [2024-11-18 00:38:25.904634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.904656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.904671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.905836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.905860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.905888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.078 [2024-11-18 00:38:25.905907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.905931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.078 [2024-11-18 00:38:25.905948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.905971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.905988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.906009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.906041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.906065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.906082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.906118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.906134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.906154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.906170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.906190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.906206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.906226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.906242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.907285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.907308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.907363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.078 [2024-11-18 00:38:25.907381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.907403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.078 [2024-11-18 00:38:25.907419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.907441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.907457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.907479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.907495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.078 [2024-11-18 00:38:25.907517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.078 [2024-11-18 00:38:25.907534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:05.079 [2024-11-18 00:38:25.907570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.907587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.907608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.907638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.907660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.907675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.907695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.907710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.907730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.907745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.907772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.907788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.907809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.907829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.907865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.907882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.907904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.907936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.907960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.907977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.907999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.908016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.908038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.908055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.908077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.908094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.908117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.908134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.908156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.908173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.908196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.908213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.908236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.908269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.908291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.908307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.908352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.908389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.908413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.908429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.908449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.908465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.908487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.908503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.908524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.908540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.908560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.908576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.908597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.908613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.908634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.908650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.908685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.908702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.910573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.910598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.910626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.910644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.910684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.910700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.910737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.910752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.910779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.910795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.910815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.910830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.910851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.910894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.910918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.910935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.910956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.910972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.910994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.079 [2024-11-18 00:38:25.911010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.911031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.079 [2024-11-18 00:38:25.911046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:33:05.079 [2024-11-18 00:38:25.911084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.911101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.911139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.911157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.911178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.911196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.911218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.911235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.911257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.911274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.911300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.911328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.911368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.911385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.911407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.911423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.911461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.911477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.911498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.911514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.911535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.911550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.911572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.911602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.911624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.911639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.911676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.911694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.912604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.912627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.912652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.912669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.912705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.912722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.912743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.912790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.912815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.912832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.912853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.912869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.912890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.912906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.912926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.912943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.912982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.912998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.913394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.913419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.913461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.913481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.913504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.913537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.913560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.913577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.913599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.913616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.913638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.913655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.913677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.913699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.913722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.913739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.913762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.913779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.913801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.913832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.913855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.913882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.913905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.913922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.913943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.913959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.913980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.914007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.914030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.080 [2024-11-18 00:38:25.914046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:33:05.080 [2024-11-18 00:38:25.914067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.080 [2024-11-18 00:38:25.914082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.914103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.081 [2024-11-18 00:38:25.914119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.914140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.914156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.914176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.081 [2024-11-18 00:38:25.914207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.914234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.914250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.915797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.081 [2024-11-18 00:38:25.915819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.915844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.081 [2024-11-18 00:38:25.915861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.915883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.081 [2024-11-18 00:38:25.915898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.915919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.081 [2024-11-18 00:38:25.915934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.915955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.081 [2024-11-18 00:38:25.915972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.915991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.081 [2024-11-18 00:38:25.916007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.916027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.081 [2024-11-18 00:38:25.916043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.916064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.081 [2024-11-18 00:38:25.916079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.916099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.916115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.916135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.081 [2024-11-18 00:38:25.916151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.916171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.081 [2024-11-18 00:38:25.916187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.916212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.916244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.916267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.916283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.916304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.081 [2024-11-18 00:38:25.916356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.916380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.081 [2024-11-18 00:38:25.916397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.916419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.081 [2024-11-18 00:38:25.916435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.916456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.916473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.916495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.916511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.916532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.916564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.916586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.916602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.916637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.916653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.916674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.916689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.916709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.916724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.916745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.916764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.919016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.081 [2024-11-18 00:38:25.919039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.919064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.081 [2024-11-18 00:38:25.919081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.919102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.919134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.919167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.919202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.919225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.919243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.919265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.919283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.919306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.919334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.919367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.919384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.919406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.081 [2024-11-18 00:38:25.919423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:33:05.081 [2024-11-18 00:38:25.919445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.082 [2024-11-18 00:38:25.919462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:33:05.082 [2024-11-18 00:38:25.919500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.082 [2024-11-18 00:38:25.919516] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.919537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.919557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.919579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.919596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.919616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.919646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.919668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.919683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.919703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.919718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.919739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.919754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.919775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.919790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.919810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.919826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.919846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.919861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.919881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.919896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.919916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.082 [2024-11-18 00:38:25.919933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.919953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.919969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.919989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.920008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.920029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.082 [2024-11-18 00:38:25.920060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.920082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.082 [2024-11-18 00:38:25.920098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.920137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.082 [2024-11-18 00:38:25.920154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.920176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.082 [2024-11-18 00:38:25.920193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.920215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.920232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.920254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.920270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.920292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.082 [2024-11-18 00:38:25.920317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.920342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.920360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.920966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.920990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.921034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.921055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.921078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.082 [2024-11-18 00:38:25.921111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.921133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.082 [2024-11-18 00:38:25.921150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.921205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.082 [2024-11-18 00:38:25.921239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.921261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.082 [2024-11-18 00:38:25.921277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.921323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.082 [2024-11-18 00:38:25.921342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.921364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.082 [2024-11-18 00:38:25.921380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.921401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.921418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.921439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.921456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.922078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.922100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.922124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.922141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:05.082 [2024-11-18 00:38:25.922162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.082 [2024-11-18 00:38:25.922177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:05.083 [2024-11-18 00:38:25.922197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.083 [2024-11-18 00:38:25.922212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:05.083 [2024-11-18 00:38:25.922232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.083 [2024-11-18 00:38:25.922247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.083 [2024-11-18 00:38:25.922267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.083 [2024-11-18 00:38:25.922283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:05.083 7842.09 IOPS, 30.63 MiB/s [2024-11-17T23:38:28.905Z] 7864.03 IOPS, 30.72 MiB/s [2024-11-17T23:38:28.905Z] 7880.29 IOPS, 30.78 MiB/s [2024-11-17T23:38:28.905Z] Received shutdown signal, test time was about 34.409994 seconds 00:33:05.083 00:33:05.083 Latency(us) 00:33:05.083 [2024-11-17T23:38:28.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:05.083 Job: Nvme0n1 (Core Mask 0x4, 
workload: verify, depth: 128, IO size: 4096) 00:33:05.083 Verification LBA range: start 0x0 length 0x4000 00:33:05.083 Nvme0n1 : 34.41 7883.90 30.80 0.00 0.00 16206.17 191.91 4076242.11 00:33:05.083 [2024-11-17T23:38:28.905Z] =================================================================================================================== 00:33:05.083 [2024-11-17T23:38:28.905Z] Total : 7883.90 30.80 0.00 0.00 16206.17 191.91 4076242.11 00:33:05.083 00:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:05.341 rmmod nvme_tcp 00:33:05.341 rmmod nvme_fabrics 00:33:05.341 rmmod nvme_keyring 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:05.341 00:38:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 370070 ']' 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 370070 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 370070 ']' 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 370070 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 370070 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 370070' 00:33:05.341 killing process with pid 370070 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 370070 00:33:05.341 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 370070 00:33:05.599 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:05.599 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:05.599 00:38:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:05.599 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:05.599 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:33:05.599 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:33:05.599 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:05.599 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:05.599 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:05.599 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.599 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.599 00:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.140 00:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:08.141 00:33:08.141 real 0m43.253s 00:33:08.141 user 2m9.305s 00:33:08.141 sys 0m11.926s 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:08.141 ************************************ 00:33:08.141 END TEST nvmf_host_multipath_status 00:33:08.141 ************************************ 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 
00:33:08.141 00:38:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.141 ************************************ 00:33:08.141 START TEST nvmf_discovery_remove_ifc 00:33:08.141 ************************************ 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:08.141 * Looking for test storage... 00:33:08.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:08.141 00:38:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:08.141 00:38:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:08.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.141 --rc genhtml_branch_coverage=1 00:33:08.141 --rc genhtml_function_coverage=1 00:33:08.141 --rc genhtml_legend=1 00:33:08.141 --rc geninfo_all_blocks=1 00:33:08.141 --rc geninfo_unexecuted_blocks=1 00:33:08.141 00:33:08.141 ' 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:08.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.141 --rc genhtml_branch_coverage=1 00:33:08.141 --rc genhtml_function_coverage=1 00:33:08.141 --rc genhtml_legend=1 00:33:08.141 --rc geninfo_all_blocks=1 00:33:08.141 --rc geninfo_unexecuted_blocks=1 00:33:08.141 00:33:08.141 ' 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:08.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.141 --rc genhtml_branch_coverage=1 00:33:08.141 --rc genhtml_function_coverage=1 00:33:08.141 --rc genhtml_legend=1 00:33:08.141 --rc geninfo_all_blocks=1 00:33:08.141 --rc geninfo_unexecuted_blocks=1 00:33:08.141 00:33:08.141 ' 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:08.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.141 --rc genhtml_branch_coverage=1 00:33:08.141 --rc genhtml_function_coverage=1 00:33:08.141 --rc genhtml_legend=1 00:33:08.141 --rc geninfo_all_blocks=1 00:33:08.141 --rc geninfo_unexecuted_blocks=1 00:33:08.141 00:33:08.141 ' 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:08.141 00:38:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.141 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.142 
00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:08.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:08.142 
00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:08.142 00:38:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:10.051 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:10.051 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:10.051 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:10.051 00:38:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:10.051 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:10.051 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:10.052 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:10.052 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:10.052 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:10.052 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:10.052 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:10.052 00:38:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:10.052 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:10.052 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:10.052 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:10.052 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:10.052 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:10.052 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:10.052 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:10.052 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:10.052 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:10.052 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:10.052 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:10.052 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:10.052 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:10.310 00:38:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:10.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:10.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:33:10.310 00:33:10.310 --- 10.0.0.2 ping statistics --- 00:33:10.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.310 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:10.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:10.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:33:10.310 00:33:10.310 --- 10.0.0.1 ping statistics --- 00:33:10.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.310 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=376687 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 376687 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 376687 ']' 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.310 00:38:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:10.310 [2024-11-18 00:38:33.982874] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:33:10.310 [2024-11-18 00:38:33.982967] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:10.310 [2024-11-18 00:38:34.055267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.310 [2024-11-18 00:38:34.101400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:10.310 [2024-11-18 00:38:34.101450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:10.310 [2024-11-18 00:38:34.101473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:10.310 [2024-11-18 00:38:34.101484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:10.310 [2024-11-18 00:38:34.101493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:10.310 [2024-11-18 00:38:34.102083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:10.568 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:10.568 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:10.568 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:10.568 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:10.568 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:10.568 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:10.568 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:10.568 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.568 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:10.568 [2024-11-18 00:38:34.249494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:10.568 [2024-11-18 00:38:34.257710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:10.568 null0 00:33:10.568 [2024-11-18 00:38:34.289608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:33:10.568 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.568 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=376713 00:33:10.568 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:10.568 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 376713 /tmp/host.sock 00:33:10.568 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 376713 ']' 00:33:10.568 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:10.568 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.568 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:10.568 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:10.568 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.569 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:10.569 [2024-11-18 00:38:34.354454] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:33:10.569 [2024-11-18 00:38:34.354523] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376713 ] 00:33:10.826 [2024-11-18 00:38:34.421854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.826 [2024-11-18 00:38:34.466717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:10.826 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:10.826 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:10.826 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:10.826 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:10.826 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.826 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:10.826 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.826 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:10.826 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.826 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:11.084 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.084 00:38:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:11.084 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.084 00:38:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:12.014 [2024-11-18 00:38:35.696755] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:12.014 [2024-11-18 00:38:35.696791] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:12.014 [2024-11-18 00:38:35.696818] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:12.014 [2024-11-18 00:38:35.823210] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:12.271 [2024-11-18 00:38:36.038594] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:12.272 [2024-11-18 00:38:36.039684] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1bd3c00:1 started. 
00:33:12.272 [2024-11-18 00:38:36.041348] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:12.272 [2024-11-18 00:38:36.041401] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:12.272 [2024-11-18 00:38:36.041433] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:12.272 [2024-11-18 00:38:36.041454] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:12.272 [2024-11-18 00:38:36.041484] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:12.272 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.272 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:12.272 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:12.272 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:12.272 [2024-11-18 00:38:36.045859] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1bd3c00 was disconnected and freed. delete nvme_qpair. 
00:33:12.272 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:12.272 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.272 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:12.272 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:12.272 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:12.272 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.272 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:12.272 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:12.529 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:12.529 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:12.529 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:12.529 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:12.529 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:12.529 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.529 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:12.529 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:33:12.529 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:12.529 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.529 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:12.529 00:38:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:13.461 00:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:13.461 00:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:13.461 00:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:13.462 00:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.462 00:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:13.462 00:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:13.462 00:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:13.462 00:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.462 00:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:13.462 00:38:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:14.835 00:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:14.835 00:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:33:14.835 00:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.835 00:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:14.835 00:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:14.835 00:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:14.835 00:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:14.835 00:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.835 00:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:14.835 00:38:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:15.774 00:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:15.774 00:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:15.774 00:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:15.774 00:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:15.774 00:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.774 00:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:15.774 00:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:15.774 00:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.774 00:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:15.774 00:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:16.706 00:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:16.706 00:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:16.706 00:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:16.706 00:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.706 00:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:16.706 00:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:16.707 00:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:16.707 00:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.707 00:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:16.707 00:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:17.639 00:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:17.639 00:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:17.639 00:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:17.639 00:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.639 00:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- 
# sort 00:33:17.639 00:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:17.639 00:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:17.639 00:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.639 00:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:17.639 00:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:17.932 [2024-11-18 00:38:41.482759] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:17.932 [2024-11-18 00:38:41.482830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.932 [2024-11-18 00:38:41.482854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.932 [2024-11-18 00:38:41.482875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.932 [2024-11-18 00:38:41.482889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.932 [2024-11-18 00:38:41.482904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.932 [2024-11-18 00:38:41.482917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.932 [2024-11-18 00:38:41.482932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:33:17.932 [2024-11-18 00:38:41.482946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.932 [2024-11-18 00:38:41.482987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.932 [2024-11-18 00:38:41.483003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.932 [2024-11-18 00:38:41.483016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0400 is same with the state(6) to be set 00:33:17.932 [2024-11-18 00:38:41.492778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb0400 (9): Bad file descriptor 00:33:17.932 [2024-11-18 00:38:41.502821] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:17.932 [2024-11-18 00:38:41.502844] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:17.932 [2024-11-18 00:38:41.502854] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:17.932 [2024-11-18 00:38:41.502862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:17.933 [2024-11-18 00:38:41.502900] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:18.969 00:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:18.969 00:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:18.969 00:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:18.969 00:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.970 00:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:18.970 00:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:18.970 00:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:18.970 [2024-11-18 00:38:42.509435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:18.970 [2024-11-18 00:38:42.509498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb0400 with addr=10.0.0.2, port=4420 00:33:18.970 [2024-11-18 00:38:42.509527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0400 is same with the state(6) to be set 00:33:18.970 [2024-11-18 00:38:42.509575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb0400 (9): Bad file descriptor 00:33:18.970 [2024-11-18 00:38:42.510024] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:33:18.970 [2024-11-18 00:38:42.510066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:18.970 [2024-11-18 00:38:42.510084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:18.970 [2024-11-18 00:38:42.510101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:18.970 [2024-11-18 00:38:42.510114] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:18.970 [2024-11-18 00:38:42.510124] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:18.970 [2024-11-18 00:38:42.510131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:18.970 [2024-11-18 00:38:42.510144] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:18.970 [2024-11-18 00:38:42.510153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:18.970 00:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.970 00:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:18.970 00:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:19.992 [2024-11-18 00:38:43.512642] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:19.992 [2024-11-18 00:38:43.512681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:33:19.992 [2024-11-18 00:38:43.512706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:19.992 [2024-11-18 00:38:43.512720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:19.992 [2024-11-18 00:38:43.512734] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:33:19.992 [2024-11-18 00:38:43.512746] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:19.992 [2024-11-18 00:38:43.512756] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:19.992 [2024-11-18 00:38:43.512764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:19.992 [2024-11-18 00:38:43.512803] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:19.992 [2024-11-18 00:38:43.512862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:19.992 [2024-11-18 00:38:43.512886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.992 [2024-11-18 00:38:43.512907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:19.992 [2024-11-18 00:38:43.512921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.992 [2024-11-18 00:38:43.512935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:33:19.992 [2024-11-18 00:38:43.512950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.992 [2024-11-18 00:38:43.512966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:19.992 [2024-11-18 00:38:43.512980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.992 [2024-11-18 00:38:43.512995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:19.992 [2024-11-18 00:38:43.513007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.992 [2024-11-18 00:38:43.513022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:33:19.992 [2024-11-18 00:38:43.513126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9fb40 (9): Bad file descriptor 00:33:19.992 [2024-11-18 00:38:43.514149] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:19.992 [2024-11-18 00:38:43.514171] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:33:19.992 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:19.992 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:19.992 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:19.992 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:19.992 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:19.992 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:19.993 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:19.993 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.993 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:19.993 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:19.993 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:19.993 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:19.993 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:19.993 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:19.993 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:19.993 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.993 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:19.993 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:19.993 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:19.993 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:33:19.993 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:19.993 00:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:20.926 00:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:20.926 00:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:20.926 00:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:20.926 00:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.926 00:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:20.926 00:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:20.926 00:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:20.926 00:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.926 00:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:20.926 00:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:21.859 [2024-11-18 00:38:45.573453] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:21.859 [2024-11-18 00:38:45.573479] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:21.859 [2024-11-18 00:38:45.573502] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:21.859 [2024-11-18 00:38:45.659772] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:22.116 00:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:22.116 00:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:22.116 00:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:22.116 00:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.116 00:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:22.116 00:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:22.116 00:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:22.116 00:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.116 00:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:22.116 00:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:22.116 [2024-11-18 00:38:45.762611] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:33:22.116 [2024-11-18 00:38:45.763407] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1bb2720:1 started. 
00:33:22.117 [2024-11-18 00:38:45.764770] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:22.117 [2024-11-18 00:38:45.764811] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:22.117 [2024-11-18 00:38:45.764840] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:22.117 [2024-11-18 00:38:45.764860] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:22.117 [2024-11-18 00:38:45.764872] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:22.117 [2024-11-18 00:38:45.771574] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1bb2720 was disconnected and freed. delete nvme_qpair. 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:23.048 00:38:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 376713 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 376713 ']' 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 376713 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376713 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376713' 00:33:23.048 killing process with pid 376713 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 376713 00:33:23.048 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 376713 00:33:23.306 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:23.306 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:23.306 00:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:23.306 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:23.306 00:38:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:23.306 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:23.306 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:23.306 rmmod nvme_tcp 00:33:23.306 rmmod nvme_fabrics 00:33:23.306 rmmod nvme_keyring 00:33:23.306 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:23.306 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:23.306 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:23.306 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 376687 ']' 00:33:23.306 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 376687 00:33:23.306 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 376687 ']' 00:33:23.306 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 376687 00:33:23.306 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:23.306 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:23.306 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376687 00:33:23.306 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:23.306 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:23.306 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376687' 00:33:23.306 killing process 
with pid 376687 00:33:23.306 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 376687 00:33:23.306 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 376687 00:33:23.565 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:23.565 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:23.565 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:23.565 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:23.565 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:33:23.565 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:23.565 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:33:23.565 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:23.565 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:23.565 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.565 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:23.565 00:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.100 00:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:26.100 00:33:26.100 real 0m17.882s 00:33:26.100 user 0m25.879s 00:33:26.100 sys 0m2.967s 00:33:26.100 00:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:33:26.100 00:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:26.100 ************************************ 00:33:26.100 END TEST nvmf_discovery_remove_ifc 00:33:26.100 ************************************ 00:33:26.100 00:38:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:26.100 00:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:26.100 00:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:26.100 00:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.100 ************************************ 00:33:26.100 START TEST nvmf_identify_kernel_target 00:33:26.100 ************************************ 00:33:26.100 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:26.100 * Looking for test storage... 
00:33:26.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:26.100 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:26.100 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:33:26.100 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:26.101 00:38:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:26.101 00:38:49 
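The `cmp_versions 1.15 '<' 2` trace above splits each version string on `.`, `-`, and `:` and compares component-wise. A minimal sketch of that comparison (the function name `lt_version` is illustrative, not the script's own; assumes purely numeric components):

```shell
# Hedged sketch of the version comparison performed by scripts/common.sh:
# split each version on ".", "-", ":" and compare component-wise,
# padding the shorter list with zeros, as the trace above does.
lt_version() {
    local -a ver1 ver2
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1   # first differing component decides
        (( a < b )) && return 0
    done
    return 1                      # equal means not less-than
}

lt_version 1.15 2 && echo "1.15 < 2"
```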
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:26.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.101 --rc genhtml_branch_coverage=1 00:33:26.101 --rc genhtml_function_coverage=1 00:33:26.101 --rc genhtml_legend=1 00:33:26.101 --rc geninfo_all_blocks=1 00:33:26.101 --rc geninfo_unexecuted_blocks=1 00:33:26.101 00:33:26.101 ' 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:26.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.101 --rc genhtml_branch_coverage=1 00:33:26.101 --rc genhtml_function_coverage=1 00:33:26.101 --rc genhtml_legend=1 00:33:26.101 --rc geninfo_all_blocks=1 00:33:26.101 --rc geninfo_unexecuted_blocks=1 00:33:26.101 00:33:26.101 ' 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:26.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.101 --rc genhtml_branch_coverage=1 00:33:26.101 --rc genhtml_function_coverage=1 00:33:26.101 --rc genhtml_legend=1 00:33:26.101 --rc geninfo_all_blocks=1 00:33:26.101 --rc geninfo_unexecuted_blocks=1 00:33:26.101 00:33:26.101 ' 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:26.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.101 --rc genhtml_branch_coverage=1 00:33:26.101 --rc genhtml_function_coverage=1 00:33:26.101 --rc genhtml_legend=1 00:33:26.101 --rc geninfo_all_blocks=1 00:33:26.101 --rc geninfo_unexecuted_blocks=1 00:33:26.101 00:33:26.101 ' 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:26.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
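The `nvme gen-hostnqn` step earlier in this trace produced a host NQN of the form `nqn.2014-08.org.nvmexpress:uuid:<uuid>`. A sketch of that format, using the kernel's random UUID file as a stand-in for nvme-cli's own UUID source:

```shell
# Hedged sketch of what `nvme gen-hostnqn` emits: a uuid-form host NQN.
# /proc/sys/kernel/random/uuid is an assumed Linux-only stand-in here.
uuid=$(cat /proc/sys/kernel/random/uuid)
hostnqn="nqn.2014-08.org.nvmexpress:uuid:$uuid"
echo "$hostnqn"
```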
00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:26.101 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:26.102 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:26.102 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:26.102 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:26.102 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.102 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:26.102 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.102 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:26.102 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:26.102 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:26.102 00:38:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:28.007 00:38:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:28.007 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:28.008 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:28.008 00:38:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:28.008 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:28.008 00:38:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:28.008 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:28.008 Found net devices under 0000:0a:00.1: cvl_0_1 
00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:28.008 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:28.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:28.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:33:28.269 00:33:28.269 --- 10.0.0.2 ping statistics --- 00:33:28.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.269 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:28.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:28.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:33:28.269 00:33:28.269 --- 10.0.0.1 ping statistics --- 00:33:28.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.269 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:28.269 
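The `nvmf_tcp_init` sequence above moves one NIC port into a fresh network namespace so target (10.0.0.2) and initiator (10.0.0.1) traffic crosses a real link, then verifies both directions with ping. A root-only setup sketch of those steps (interface and namespace names mirror this log; adjust for your hardware):

```shell
#!/usr/bin/env bash
# Hedged sketch (requires root) of the test-network bring-up in the log.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # target-side port, moved into the namespace
INI_IF=cvl_0_1        # initiator-side port, stays in the root namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic in, tagged so teardown can find the rule later.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Verify both directions before running the test, as the log does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```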
00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:28.269 00:38:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:29.204 Waiting for block devices as requested 00:33:29.204 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:29.462 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:29.462 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:29.720 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:29.720 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:29.720 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:29.979 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:29.979 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:29.980 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:29.980 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:30.239 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:30.239 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:30.239 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:30.239 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:30.498 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:33:30.498 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:30.498 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:30.756 No valid GPT data, bailing 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:33:30.756 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:30.757 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:30.757 00:33:30.757 Discovery Log Number of Records 2, Generation counter 2 00:33:30.757 =====Discovery Log Entry 0====== 00:33:30.757 trtype: tcp 00:33:30.757 adrfam: ipv4 00:33:30.757 subtype: current discovery subsystem 
00:33:30.757 treq: not specified, sq flow control disable supported 00:33:30.757 portid: 1 00:33:30.757 trsvcid: 4420 00:33:30.757 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:30.757 traddr: 10.0.0.1 00:33:30.757 eflags: none 00:33:30.757 sectype: none 00:33:30.757 =====Discovery Log Entry 1====== 00:33:30.757 trtype: tcp 00:33:30.757 adrfam: ipv4 00:33:30.757 subtype: nvme subsystem 00:33:30.757 treq: not specified, sq flow control disable supported 00:33:30.757 portid: 1 00:33:30.757 trsvcid: 4420 00:33:30.757 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:30.757 traddr: 10.0.0.1 00:33:30.757 eflags: none 00:33:30.757 sectype: none 00:33:30.757 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:30.757 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:31.016 ===================================================== 00:33:31.016 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:31.016 ===================================================== 00:33:31.016 Controller Capabilities/Features 00:33:31.016 ================================ 00:33:31.016 Vendor ID: 0000 00:33:31.016 Subsystem Vendor ID: 0000 00:33:31.016 Serial Number: c2d0492f4add0b574b01 00:33:31.016 Model Number: Linux 00:33:31.016 Firmware Version: 6.8.9-20 00:33:31.016 Recommended Arb Burst: 0 00:33:31.016 IEEE OUI Identifier: 00 00 00 00:33:31.016 Multi-path I/O 00:33:31.016 May have multiple subsystem ports: No 00:33:31.016 May have multiple controllers: No 00:33:31.016 Associated with SR-IOV VF: No 00:33:31.016 Max Data Transfer Size: Unlimited 00:33:31.016 Max Number of Namespaces: 0 00:33:31.016 Max Number of I/O Queues: 1024 00:33:31.016 NVMe Specification Version (VS): 1.3 00:33:31.016 NVMe Specification Version (Identify): 1.3 00:33:31.016 Maximum Queue Entries: 1024 
00:33:31.016 Contiguous Queues Required: No 00:33:31.016 Arbitration Mechanisms Supported 00:33:31.016 Weighted Round Robin: Not Supported 00:33:31.016 Vendor Specific: Not Supported 00:33:31.016 Reset Timeout: 7500 ms 00:33:31.016 Doorbell Stride: 4 bytes 00:33:31.016 NVM Subsystem Reset: Not Supported 00:33:31.016 Command Sets Supported 00:33:31.016 NVM Command Set: Supported 00:33:31.016 Boot Partition: Not Supported 00:33:31.016 Memory Page Size Minimum: 4096 bytes 00:33:31.016 Memory Page Size Maximum: 4096 bytes 00:33:31.016 Persistent Memory Region: Not Supported 00:33:31.016 Optional Asynchronous Events Supported 00:33:31.016 Namespace Attribute Notices: Not Supported 00:33:31.016 Firmware Activation Notices: Not Supported 00:33:31.016 ANA Change Notices: Not Supported 00:33:31.016 PLE Aggregate Log Change Notices: Not Supported 00:33:31.016 LBA Status Info Alert Notices: Not Supported 00:33:31.016 EGE Aggregate Log Change Notices: Not Supported 00:33:31.016 Normal NVM Subsystem Shutdown event: Not Supported 00:33:31.016 Zone Descriptor Change Notices: Not Supported 00:33:31.016 Discovery Log Change Notices: Supported 00:33:31.016 Controller Attributes 00:33:31.016 128-bit Host Identifier: Not Supported 00:33:31.016 Non-Operational Permissive Mode: Not Supported 00:33:31.016 NVM Sets: Not Supported 00:33:31.016 Read Recovery Levels: Not Supported 00:33:31.016 Endurance Groups: Not Supported 00:33:31.016 Predictable Latency Mode: Not Supported 00:33:31.016 Traffic Based Keep ALive: Not Supported 00:33:31.016 Namespace Granularity: Not Supported 00:33:31.016 SQ Associations: Not Supported 00:33:31.016 UUID List: Not Supported 00:33:31.016 Multi-Domain Subsystem: Not Supported 00:33:31.016 Fixed Capacity Management: Not Supported 00:33:31.016 Variable Capacity Management: Not Supported 00:33:31.016 Delete Endurance Group: Not Supported 00:33:31.016 Delete NVM Set: Not Supported 00:33:31.016 Extended LBA Formats Supported: Not Supported 00:33:31.016 Flexible 
Data Placement Supported: Not Supported 00:33:31.016 00:33:31.016 Controller Memory Buffer Support 00:33:31.016 ================================ 00:33:31.016 Supported: No 00:33:31.016 00:33:31.016 Persistent Memory Region Support 00:33:31.016 ================================ 00:33:31.016 Supported: No 00:33:31.016 00:33:31.016 Admin Command Set Attributes 00:33:31.016 ============================ 00:33:31.016 Security Send/Receive: Not Supported 00:33:31.016 Format NVM: Not Supported 00:33:31.016 Firmware Activate/Download: Not Supported 00:33:31.016 Namespace Management: Not Supported 00:33:31.016 Device Self-Test: Not Supported 00:33:31.016 Directives: Not Supported 00:33:31.016 NVMe-MI: Not Supported 00:33:31.016 Virtualization Management: Not Supported 00:33:31.016 Doorbell Buffer Config: Not Supported 00:33:31.016 Get LBA Status Capability: Not Supported 00:33:31.016 Command & Feature Lockdown Capability: Not Supported 00:33:31.016 Abort Command Limit: 1 00:33:31.016 Async Event Request Limit: 1 00:33:31.016 Number of Firmware Slots: N/A 00:33:31.016 Firmware Slot 1 Read-Only: N/A 00:33:31.016 Firmware Activation Without Reset: N/A 00:33:31.016 Multiple Update Detection Support: N/A 00:33:31.016 Firmware Update Granularity: No Information Provided 00:33:31.016 Per-Namespace SMART Log: No 00:33:31.016 Asymmetric Namespace Access Log Page: Not Supported 00:33:31.016 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:31.016 Command Effects Log Page: Not Supported 00:33:31.016 Get Log Page Extended Data: Supported 00:33:31.016 Telemetry Log Pages: Not Supported 00:33:31.016 Persistent Event Log Pages: Not Supported 00:33:31.016 Supported Log Pages Log Page: May Support 00:33:31.016 Commands Supported & Effects Log Page: Not Supported 00:33:31.016 Feature Identifiers & Effects Log Page:May Support 00:33:31.016 NVMe-MI Commands & Effects Log Page: May Support 00:33:31.016 Data Area 4 for Telemetry Log: Not Supported 00:33:31.016 Error Log Page Entries 
Supported: 1 00:33:31.016 Keep Alive: Not Supported 00:33:31.016 00:33:31.016 NVM Command Set Attributes 00:33:31.016 ========================== 00:33:31.016 Submission Queue Entry Size 00:33:31.016 Max: 1 00:33:31.016 Min: 1 00:33:31.016 Completion Queue Entry Size 00:33:31.016 Max: 1 00:33:31.016 Min: 1 00:33:31.016 Number of Namespaces: 0 00:33:31.016 Compare Command: Not Supported 00:33:31.016 Write Uncorrectable Command: Not Supported 00:33:31.016 Dataset Management Command: Not Supported 00:33:31.016 Write Zeroes Command: Not Supported 00:33:31.016 Set Features Save Field: Not Supported 00:33:31.016 Reservations: Not Supported 00:33:31.016 Timestamp: Not Supported 00:33:31.016 Copy: Not Supported 00:33:31.016 Volatile Write Cache: Not Present 00:33:31.016 Atomic Write Unit (Normal): 1 00:33:31.016 Atomic Write Unit (PFail): 1 00:33:31.016 Atomic Compare & Write Unit: 1 00:33:31.016 Fused Compare & Write: Not Supported 00:33:31.016 Scatter-Gather List 00:33:31.016 SGL Command Set: Supported 00:33:31.016 SGL Keyed: Not Supported 00:33:31.016 SGL Bit Bucket Descriptor: Not Supported 00:33:31.016 SGL Metadata Pointer: Not Supported 00:33:31.016 Oversized SGL: Not Supported 00:33:31.016 SGL Metadata Address: Not Supported 00:33:31.016 SGL Offset: Supported 00:33:31.016 Transport SGL Data Block: Not Supported 00:33:31.016 Replay Protected Memory Block: Not Supported 00:33:31.016 00:33:31.016 Firmware Slot Information 00:33:31.016 ========================= 00:33:31.016 Active slot: 0 00:33:31.016 00:33:31.016 00:33:31.017 Error Log 00:33:31.017 ========= 00:33:31.017 00:33:31.017 Active Namespaces 00:33:31.017 ================= 00:33:31.017 Discovery Log Page 00:33:31.017 ================== 00:33:31.017 Generation Counter: 2 00:33:31.017 Number of Records: 2 00:33:31.017 Record Format: 0 00:33:31.017 00:33:31.017 Discovery Log Entry 0 00:33:31.017 ---------------------- 00:33:31.017 Transport Type: 3 (TCP) 00:33:31.017 Address Family: 1 (IPv4) 00:33:31.017 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:33:31.017 Entry Flags: 00:33:31.017 Duplicate Returned Information: 0 00:33:31.017 Explicit Persistent Connection Support for Discovery: 0 00:33:31.017 Transport Requirements: 00:33:31.017 Secure Channel: Not Specified 00:33:31.017 Port ID: 1 (0x0001) 00:33:31.017 Controller ID: 65535 (0xffff) 00:33:31.017 Admin Max SQ Size: 32 00:33:31.017 Transport Service Identifier: 4420 00:33:31.017 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:31.017 Transport Address: 10.0.0.1 00:33:31.017 Discovery Log Entry 1 00:33:31.017 ---------------------- 00:33:31.017 Transport Type: 3 (TCP) 00:33:31.017 Address Family: 1 (IPv4) 00:33:31.017 Subsystem Type: 2 (NVM Subsystem) 00:33:31.017 Entry Flags: 00:33:31.017 Duplicate Returned Information: 0 00:33:31.017 Explicit Persistent Connection Support for Discovery: 0 00:33:31.017 Transport Requirements: 00:33:31.017 Secure Channel: Not Specified 00:33:31.017 Port ID: 1 (0x0001) 00:33:31.017 Controller ID: 65535 (0xffff) 00:33:31.017 Admin Max SQ Size: 32 00:33:31.017 Transport Service Identifier: 4420 00:33:31.017 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:31.017 Transport Address: 10.0.0.1 00:33:31.017 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:31.017 get_feature(0x01) failed 00:33:31.017 get_feature(0x02) failed 00:33:31.017 get_feature(0x04) failed 00:33:31.017 ===================================================== 00:33:31.017 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:31.017 ===================================================== 00:33:31.017 Controller Capabilities/Features 00:33:31.017 ================================ 00:33:31.017 Vendor ID: 0000 00:33:31.017 Subsystem Vendor ID: 
0000 00:33:31.017 Serial Number: f403dfcdaedfc3af1632 00:33:31.017 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:31.017 Firmware Version: 6.8.9-20 00:33:31.017 Recommended Arb Burst: 6 00:33:31.017 IEEE OUI Identifier: 00 00 00 00:33:31.017 Multi-path I/O 00:33:31.017 May have multiple subsystem ports: Yes 00:33:31.017 May have multiple controllers: Yes 00:33:31.017 Associated with SR-IOV VF: No 00:33:31.017 Max Data Transfer Size: Unlimited 00:33:31.017 Max Number of Namespaces: 1024 00:33:31.017 Max Number of I/O Queues: 128 00:33:31.017 NVMe Specification Version (VS): 1.3 00:33:31.017 NVMe Specification Version (Identify): 1.3 00:33:31.017 Maximum Queue Entries: 1024 00:33:31.017 Contiguous Queues Required: No 00:33:31.017 Arbitration Mechanisms Supported 00:33:31.017 Weighted Round Robin: Not Supported 00:33:31.017 Vendor Specific: Not Supported 00:33:31.017 Reset Timeout: 7500 ms 00:33:31.017 Doorbell Stride: 4 bytes 00:33:31.017 NVM Subsystem Reset: Not Supported 00:33:31.017 Command Sets Supported 00:33:31.017 NVM Command Set: Supported 00:33:31.017 Boot Partition: Not Supported 00:33:31.017 Memory Page Size Minimum: 4096 bytes 00:33:31.017 Memory Page Size Maximum: 4096 bytes 00:33:31.017 Persistent Memory Region: Not Supported 00:33:31.017 Optional Asynchronous Events Supported 00:33:31.017 Namespace Attribute Notices: Supported 00:33:31.017 Firmware Activation Notices: Not Supported 00:33:31.017 ANA Change Notices: Supported 00:33:31.017 PLE Aggregate Log Change Notices: Not Supported 00:33:31.017 LBA Status Info Alert Notices: Not Supported 00:33:31.017 EGE Aggregate Log Change Notices: Not Supported 00:33:31.017 Normal NVM Subsystem Shutdown event: Not Supported 00:33:31.017 Zone Descriptor Change Notices: Not Supported 00:33:31.017 Discovery Log Change Notices: Not Supported 00:33:31.017 Controller Attributes 00:33:31.017 128-bit Host Identifier: Supported 00:33:31.017 Non-Operational Permissive Mode: Not Supported 00:33:31.017 NVM Sets: Not 
Supported 00:33:31.017 Read Recovery Levels: Not Supported 00:33:31.017 Endurance Groups: Not Supported 00:33:31.017 Predictable Latency Mode: Not Supported 00:33:31.017 Traffic Based Keep ALive: Supported 00:33:31.017 Namespace Granularity: Not Supported 00:33:31.017 SQ Associations: Not Supported 00:33:31.017 UUID List: Not Supported 00:33:31.017 Multi-Domain Subsystem: Not Supported 00:33:31.017 Fixed Capacity Management: Not Supported 00:33:31.017 Variable Capacity Management: Not Supported 00:33:31.017 Delete Endurance Group: Not Supported 00:33:31.017 Delete NVM Set: Not Supported 00:33:31.017 Extended LBA Formats Supported: Not Supported 00:33:31.017 Flexible Data Placement Supported: Not Supported 00:33:31.017 00:33:31.017 Controller Memory Buffer Support 00:33:31.017 ================================ 00:33:31.017 Supported: No 00:33:31.017 00:33:31.017 Persistent Memory Region Support 00:33:31.017 ================================ 00:33:31.017 Supported: No 00:33:31.017 00:33:31.017 Admin Command Set Attributes 00:33:31.017 ============================ 00:33:31.017 Security Send/Receive: Not Supported 00:33:31.017 Format NVM: Not Supported 00:33:31.017 Firmware Activate/Download: Not Supported 00:33:31.017 Namespace Management: Not Supported 00:33:31.017 Device Self-Test: Not Supported 00:33:31.017 Directives: Not Supported 00:33:31.017 NVMe-MI: Not Supported 00:33:31.017 Virtualization Management: Not Supported 00:33:31.017 Doorbell Buffer Config: Not Supported 00:33:31.017 Get LBA Status Capability: Not Supported 00:33:31.017 Command & Feature Lockdown Capability: Not Supported 00:33:31.017 Abort Command Limit: 4 00:33:31.017 Async Event Request Limit: 4 00:33:31.017 Number of Firmware Slots: N/A 00:33:31.017 Firmware Slot 1 Read-Only: N/A 00:33:31.017 Firmware Activation Without Reset: N/A 00:33:31.017 Multiple Update Detection Support: N/A 00:33:31.017 Firmware Update Granularity: No Information Provided 00:33:31.017 Per-Namespace SMART Log: Yes 
00:33:31.017 Asymmetric Namespace Access Log Page: Supported 00:33:31.017 ANA Transition Time : 10 sec 00:33:31.017 00:33:31.017 Asymmetric Namespace Access Capabilities 00:33:31.017 ANA Optimized State : Supported 00:33:31.017 ANA Non-Optimized State : Supported 00:33:31.017 ANA Inaccessible State : Supported 00:33:31.017 ANA Persistent Loss State : Supported 00:33:31.017 ANA Change State : Supported 00:33:31.017 ANAGRPID is not changed : No 00:33:31.017 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:31.017 00:33:31.017 ANA Group Identifier Maximum : 128 00:33:31.017 Number of ANA Group Identifiers : 128 00:33:31.017 Max Number of Allowed Namespaces : 1024 00:33:31.017 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:31.017 Command Effects Log Page: Supported 00:33:31.017 Get Log Page Extended Data: Supported 00:33:31.017 Telemetry Log Pages: Not Supported 00:33:31.017 Persistent Event Log Pages: Not Supported 00:33:31.017 Supported Log Pages Log Page: May Support 00:33:31.017 Commands Supported & Effects Log Page: Not Supported 00:33:31.017 Feature Identifiers & Effects Log Page:May Support 00:33:31.017 NVMe-MI Commands & Effects Log Page: May Support 00:33:31.017 Data Area 4 for Telemetry Log: Not Supported 00:33:31.017 Error Log Page Entries Supported: 128 00:33:31.017 Keep Alive: Supported 00:33:31.017 Keep Alive Granularity: 1000 ms 00:33:31.017 00:33:31.017 NVM Command Set Attributes 00:33:31.017 ========================== 00:33:31.017 Submission Queue Entry Size 00:33:31.017 Max: 64 00:33:31.017 Min: 64 00:33:31.017 Completion Queue Entry Size 00:33:31.017 Max: 16 00:33:31.017 Min: 16 00:33:31.017 Number of Namespaces: 1024 00:33:31.017 Compare Command: Not Supported 00:33:31.017 Write Uncorrectable Command: Not Supported 00:33:31.017 Dataset Management Command: Supported 00:33:31.017 Write Zeroes Command: Supported 00:33:31.017 Set Features Save Field: Not Supported 00:33:31.017 Reservations: Not Supported 00:33:31.017 Timestamp: Not Supported 
00:33:31.017 Copy: Not Supported 00:33:31.017 Volatile Write Cache: Present 00:33:31.017 Atomic Write Unit (Normal): 1 00:33:31.017 Atomic Write Unit (PFail): 1 00:33:31.017 Atomic Compare & Write Unit: 1 00:33:31.017 Fused Compare & Write: Not Supported 00:33:31.017 Scatter-Gather List 00:33:31.017 SGL Command Set: Supported 00:33:31.017 SGL Keyed: Not Supported 00:33:31.018 SGL Bit Bucket Descriptor: Not Supported 00:33:31.018 SGL Metadata Pointer: Not Supported 00:33:31.018 Oversized SGL: Not Supported 00:33:31.018 SGL Metadata Address: Not Supported 00:33:31.018 SGL Offset: Supported 00:33:31.018 Transport SGL Data Block: Not Supported 00:33:31.018 Replay Protected Memory Block: Not Supported 00:33:31.018 00:33:31.018 Firmware Slot Information 00:33:31.018 ========================= 00:33:31.018 Active slot: 0 00:33:31.018 00:33:31.018 Asymmetric Namespace Access 00:33:31.018 =========================== 00:33:31.018 Change Count : 0 00:33:31.018 Number of ANA Group Descriptors : 1 00:33:31.018 ANA Group Descriptor : 0 00:33:31.018 ANA Group ID : 1 00:33:31.018 Number of NSID Values : 1 00:33:31.018 Change Count : 0 00:33:31.018 ANA State : 1 00:33:31.018 Namespace Identifier : 1 00:33:31.018 00:33:31.018 Commands Supported and Effects 00:33:31.018 ============================== 00:33:31.018 Admin Commands 00:33:31.018 -------------- 00:33:31.018 Get Log Page (02h): Supported 00:33:31.018 Identify (06h): Supported 00:33:31.018 Abort (08h): Supported 00:33:31.018 Set Features (09h): Supported 00:33:31.018 Get Features (0Ah): Supported 00:33:31.018 Asynchronous Event Request (0Ch): Supported 00:33:31.018 Keep Alive (18h): Supported 00:33:31.018 I/O Commands 00:33:31.018 ------------ 00:33:31.018 Flush (00h): Supported 00:33:31.018 Write (01h): Supported LBA-Change 00:33:31.018 Read (02h): Supported 00:33:31.018 Write Zeroes (08h): Supported LBA-Change 00:33:31.018 Dataset Management (09h): Supported 00:33:31.018 00:33:31.018 Error Log 00:33:31.018 ========= 
00:33:31.018 Entry: 0 00:33:31.018 Error Count: 0x3 00:33:31.018 Submission Queue Id: 0x0 00:33:31.018 Command Id: 0x5 00:33:31.018 Phase Bit: 0 00:33:31.018 Status Code: 0x2 00:33:31.018 Status Code Type: 0x0 00:33:31.018 Do Not Retry: 1 00:33:31.018 Error Location: 0x28 00:33:31.018 LBA: 0x0 00:33:31.018 Namespace: 0x0 00:33:31.018 Vendor Log Page: 0x0 00:33:31.018 ----------- 00:33:31.018 Entry: 1 00:33:31.018 Error Count: 0x2 00:33:31.018 Submission Queue Id: 0x0 00:33:31.018 Command Id: 0x5 00:33:31.018 Phase Bit: 0 00:33:31.018 Status Code: 0x2 00:33:31.018 Status Code Type: 0x0 00:33:31.018 Do Not Retry: 1 00:33:31.018 Error Location: 0x28 00:33:31.018 LBA: 0x0 00:33:31.018 Namespace: 0x0 00:33:31.018 Vendor Log Page: 0x0 00:33:31.018 ----------- 00:33:31.018 Entry: 2 00:33:31.018 Error Count: 0x1 00:33:31.018 Submission Queue Id: 0x0 00:33:31.018 Command Id: 0x4 00:33:31.018 Phase Bit: 0 00:33:31.018 Status Code: 0x2 00:33:31.018 Status Code Type: 0x0 00:33:31.018 Do Not Retry: 1 00:33:31.018 Error Location: 0x28 00:33:31.018 LBA: 0x0 00:33:31.018 Namespace: 0x0 00:33:31.018 Vendor Log Page: 0x0 00:33:31.018 00:33:31.018 Number of Queues 00:33:31.018 ================ 00:33:31.018 Number of I/O Submission Queues: 128 00:33:31.018 Number of I/O Completion Queues: 128 00:33:31.018 00:33:31.018 ZNS Specific Controller Data 00:33:31.018 ============================ 00:33:31.018 Zone Append Size Limit: 0 00:33:31.018 00:33:31.018 00:33:31.018 Active Namespaces 00:33:31.018 ================= 00:33:31.018 get_feature(0x05) failed 00:33:31.018 Namespace ID:1 00:33:31.018 Command Set Identifier: NVM (00h) 00:33:31.018 Deallocate: Supported 00:33:31.018 Deallocated/Unwritten Error: Not Supported 00:33:31.018 Deallocated Read Value: Unknown 00:33:31.018 Deallocate in Write Zeroes: Not Supported 00:33:31.018 Deallocated Guard Field: 0xFFFF 00:33:31.018 Flush: Supported 00:33:31.018 Reservation: Not Supported 00:33:31.018 Namespace Sharing Capabilities: Multiple 
Controllers 00:33:31.018 Size (in LBAs): 1953525168 (931GiB) 00:33:31.018 Capacity (in LBAs): 1953525168 (931GiB) 00:33:31.018 Utilization (in LBAs): 1953525168 (931GiB) 00:33:31.018 UUID: 5a3c43a7-702b-46d2-ba19-e23a522002b2 00:33:31.018 Thin Provisioning: Not Supported 00:33:31.018 Per-NS Atomic Units: Yes 00:33:31.018 Atomic Boundary Size (Normal): 0 00:33:31.018 Atomic Boundary Size (PFail): 0 00:33:31.018 Atomic Boundary Offset: 0 00:33:31.018 NGUID/EUI64 Never Reused: No 00:33:31.018 ANA group ID: 1 00:33:31.018 Namespace Write Protected: No 00:33:31.018 Number of LBA Formats: 1 00:33:31.018 Current LBA Format: LBA Format #00 00:33:31.018 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:31.018 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:31.018 rmmod nvme_tcp 00:33:31.018 rmmod nvme_fabrics 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:31.018 00:38:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:33.552 00:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:33.552 00:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:33.552 00:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:33.552 00:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:33:33.552 00:38:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:33.552 00:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:33.552 00:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:33.552 00:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:33.552 00:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:33.552 00:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:33.552 00:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:34.489 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:34.489 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:34.489 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:34.489 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:34.489 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:34.489 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:34.489 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:34.489 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:34.489 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:34.489 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:34.489 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:34.489 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:34.489 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:34.489 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:34.489 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:34.489 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
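The configfs sequence traced above (nvmf/common.sh: mkdir subsystem/namespace/port, a series of echoes, ln -s, then the mirrored rm/rmdir teardown in clean_kernel_target) can be sketched as a standalone script. The NQN, address, port, and ordering follow the log; the backing device path and which attribute each bare `echo` targets are assumptions based on the standard nvmet configfs layout, since the trace does not show the redirection targets. Requires root and the nvmet/nvmet_tcp modules; this is a sketch, not a verified reproduction of the harness.

```shell
#!/bin/sh
# Sketch of the kernel NVMe/TCP target setup and teardown seen in the log.
# Assumption: /dev/nvme0n1 as the backing device; adjust for your machine.
set -e
nqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet

# Setup: subsystem, one namespace backed by a block device, one TCP port.
mkdir "$cfg/subsystems/$nqn"
mkdir "$cfg/subsystems/$nqn/namespaces/1"
mkdir "$cfg/ports/1"
echo "SPDK-$nqn"  > "$cfg/subsystems/$nqn/attr_model"
echo 1            > "$cfg/subsystems/$nqn/attr_allow_any_host"
echo /dev/nvme0n1 > "$cfg/subsystems/$nqn/namespaces/1/device_path"
echo 1            > "$cfg/subsystems/$nqn/namespaces/1/enable"
echo 10.0.0.1     > "$cfg/ports/1/addr_traddr"
echo tcp          > "$cfg/ports/1/addr_trtype"
echo 4420         > "$cfg/ports/1/addr_trsvcid"
echo ipv4         > "$cfg/ports/1/addr_adrfam"
# Exposing the subsystem on the port is just a symlink.
ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/"

# Teardown mirrors setup in reverse, as in the log's clean_kernel_target.
echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"
rm -f "$cfg/ports/1/subsystems/$nqn"
rmdir "$cfg/subsystems/$nqn/namespaces/1"
rmdir "$cfg/ports/1"
rmdir "$cfg/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet
```

After the ln -s, the target answers discovery on 10.0.0.1:4420, which is exactly what the `nvme discover` output above shows (two records: the discovery subsystem and testnqn).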
00:33:35.424 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:35.682 00:33:35.682 real 0m9.926s 00:33:35.682 user 0m2.122s 00:33:35.682 sys 0m3.625s 00:33:35.682 00:38:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:35.682 00:38:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:35.682 ************************************ 00:33:35.682 END TEST nvmf_identify_kernel_target 00:33:35.682 ************************************ 00:33:35.682 00:38:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:35.682 00:38:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:35.682 00:38:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:35.682 00:38:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.682 ************************************ 00:33:35.682 START TEST nvmf_auth_host 00:33:35.682 ************************************ 00:33:35.682 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:35.682 * Looking for test storage... 
00:33:35.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:35.682 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:35.682 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:33:35.682 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:35.682 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:35.682 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:35.682 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:35.682 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:35.682 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:35.682 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:35.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.683 --rc genhtml_branch_coverage=1 00:33:35.683 --rc genhtml_function_coverage=1 00:33:35.683 --rc genhtml_legend=1 00:33:35.683 --rc geninfo_all_blocks=1 00:33:35.683 --rc geninfo_unexecuted_blocks=1 00:33:35.683 00:33:35.683 ' 00:33:35.683 00:38:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:35.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.683 --rc genhtml_branch_coverage=1 00:33:35.683 --rc genhtml_function_coverage=1 00:33:35.683 --rc genhtml_legend=1 00:33:35.683 --rc geninfo_all_blocks=1 00:33:35.683 --rc geninfo_unexecuted_blocks=1 00:33:35.683 00:33:35.683 ' 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:35.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.683 --rc genhtml_branch_coverage=1 00:33:35.683 --rc genhtml_function_coverage=1 00:33:35.683 --rc genhtml_legend=1 00:33:35.683 --rc geninfo_all_blocks=1 00:33:35.683 --rc geninfo_unexecuted_blocks=1 00:33:35.683 00:33:35.683 ' 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:35.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.683 --rc genhtml_branch_coverage=1 00:33:35.683 --rc genhtml_function_coverage=1 00:33:35.683 --rc genhtml_legend=1 00:33:35.683 --rc geninfo_all_blocks=1 00:33:35.683 --rc geninfo_unexecuted_blocks=1 00:33:35.683 00:33:35.683 ' 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.683 00:38:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:35.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:35.683 00:38:59 
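The `digests` and `dhgroups` arrays set up at host/auth.sh@13-16 define the matrix the auth test sweeps; a trivial sketch of the resulting combination count (loop structure assumed from the arrays, not shown verbatim in this part of the log):

```shell
# The auth test pairs each digest with each DH group.
digests=("sha256" "sha384" "sha512")
dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
combos=0
for d in "${digests[@]}"; do
    for g in "${dhgroups[@]}"; do
        combos=$((combos + 1))
    done
done
echo "$combos"   # 3 digests x 5 groups = 15 combinations
```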
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:35.683 00:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:38.232 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:38.232 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:38.232 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:38.232 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:38.233 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:38.233 00:39:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:38.233 00:39:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:38.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:38.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:33:38.233 00:33:38.233 --- 10.0.0.2 ping statistics --- 00:33:38.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.233 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:38.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:38.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:33:38.233 00:33:38.233 --- 10.0.0.1 ping statistics --- 00:33:38.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.233 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=384034 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:38.233 00:39:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 384034 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 384034 ']' 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=05d1ba52b197d34cdada2c071b37282d 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.q6s 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 05d1ba52b197d34cdada2c071b37282d 0 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 05d1ba52b197d34cdada2c071b37282d 0 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=05d1ba52b197d34cdada2c071b37282d 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:38.233 00:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.q6s 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.q6s 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.q6s 00:33:38.233 00:39:02 
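The gen_dhchap_key trace above draws random bytes (`xxd -p -c0 -l 16 /dev/urandom`), pipes them through an inline `python -` snippet whose body the log does not show, then `chmod 0600`s the result. The framing below (base64 of the key bytes plus a little-endian CRC32, printed as `DHHC-1:<digest>:<b64>:`) is my reconstruction of the standard DH-HMAC-CHAP secret representation, not SPDK's verbatim code; `od` stands in for the log's `xxd` for portability, and python3 is assumed available:

```shell
# Sketch of gen_dhchap_key null 32: 16 random bytes -> 32 hex chars.
key=$(od -An -tx1 -N16 /dev/urandom | tr -d ' \n')

# Wrap into the DH-HMAC-CHAP secret representation. Digest id 0 = null
# (no transform), matching the "format_key DHHC-1 <hex> 0" call traced.
secret=$(python3 - "$key" 0 <<'EOF'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])
b64 = base64.b64encode(key + struct.pack('<I', zlib.crc32(key))).decode()
print(f'DHHC-1:{digest:02x}:{b64}:')
EOF
)

# Persist with owner-only permissions, as the trace does.
file=$(mktemp -t spdk.key-null.XXX)
printf '%s\n' "$secret" > "$file"
chmod 0600 "$file"
echo "$file"
```

The trailing `chmod 0600` matters: the secret is later fed to the kernel nvmet host config, and a world-readable DH-CHAP key would defeat the authentication being tested.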
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e1dda93d5f98d65bf790b17bffa06378d823e1c8aa5eff54fee4e029edad60dd 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.HZJ 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e1dda93d5f98d65bf790b17bffa06378d823e1c8aa5eff54fee4e029edad60dd 3 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e1dda93d5f98d65bf790b17bffa06378d823e1c8aa5eff54fee4e029edad60dd 3 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e1dda93d5f98d65bf790b17bffa06378d823e1c8aa5eff54fee4e029edad60dd 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:33:38.233 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.HZJ 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.HZJ 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.HZJ 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c9b906e71fe254927c18eb46a614dadaa43a0d35b0523605 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.5Og 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c9b906e71fe254927c18eb46a614dadaa43a0d35b0523605 0 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c9b906e71fe254927c18eb46a614dadaa43a0d35b0523605 0 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:38.493 00:39:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c9b906e71fe254927c18eb46a614dadaa43a0d35b0523605 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.5Og 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.5Og 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.5Og 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=01d66ef5a7969dfebe563ea8762e41b70a22313ca9978417 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Ob4 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 01d66ef5a7969dfebe563ea8762e41b70a22313ca9978417 2 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 01d66ef5a7969dfebe563ea8762e41b70a22313ca9978417 2 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=01d66ef5a7969dfebe563ea8762e41b70a22313ca9978417 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Ob4 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Ob4 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Ob4 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8456f0977f26227c342938ceb4db3c7f 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.wrC 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8456f0977f26227c342938ceb4db3c7f 1 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8456f0977f26227c342938ceb4db3c7f 1 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:38.493 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8456f0977f26227c342938ceb4db3c7f 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.wrC 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.wrC 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.wrC 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=9420f99da74c96e17998d93941987467 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.E1y 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9420f99da74c96e17998d93941987467 1 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9420f99da74c96e17998d93941987467 1 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9420f99da74c96e17998d93941987467 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.E1y 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.E1y 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.E1y 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:33:38.494 00:39:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bc5a63eae406335c5c032edcab7019de41e35fda74e13ac9 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:33:38.494 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.66G 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bc5a63eae406335c5c032edcab7019de41e35fda74e13ac9 2 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bc5a63eae406335c5c032edcab7019de41e35fda74e13ac9 2 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bc5a63eae406335c5c032edcab7019de41e35fda74e13ac9 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.66G 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.66G 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.66G 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=645d61062b76839516b930dce5742d65 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.8UV 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 645d61062b76839516b930dce5742d65 0 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 645d61062b76839516b930dce5742d65 0 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=645d61062b76839516b930dce5742d65 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.8UV 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.8UV 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.8UV 00:33:38.753 00:39:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d6744775b2d0c6fa28525bc7a132979dc1119f08caa8518575ebbcf8d265937c 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.xV7 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d6744775b2d0c6fa28525bc7a132979dc1119f08caa8518575ebbcf8d265937c 3 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d6744775b2d0c6fa28525bc7a132979dc1119f08caa8518575ebbcf8d265937c 3 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d6744775b2d0c6fa28525bc7a132979dc1119f08caa8518575ebbcf8d265937c 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.xV7 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.xV7 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.xV7 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 384034 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 384034 ']' 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:38.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
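The `gen_dhchap_key`/`format_dhchap_key` steps traced above (xxd random hex, then a small inline python formatter) can be sketched as a standalone snippet. This is a hypothetical re-creation, not the verbatim `nvmf/common.sh` source: it assumes the ASCII hex string itself is the secret and that a little-endian CRC-32 of it is appended before base64 encoding, matching the `DHHC-1:<digest>:<base64>:` secret representation seen in the trace. The key and digest values are taken directly from the log above.

```shell
# Hypothetical sketch of the format_dhchap_key step traced in this log.
# Assumptions (not verbatim from nvmf/common.sh): the ASCII hex string is the
# secret, and a CRC-32 trailer (assumed little-endian) is appended before
# base64 encoding.
key=01d66ef5a7969dfebe563ea8762e41b70a22313ca9978417   # sha384 key from the trace
digest=2                                               # 0=null 1=sha256 2=sha384 3=sha512
formatted=$(python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib

secret = sys.argv[1].encode()
# CRC-32 trailer; little-endian byte order is an assumption here
blob = secret + zlib.crc32(secret).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(blob).decode()))
EOF
)
echo "$formatted"
```

With the key above, the output should start with `DHHC-1:02:MDFkNjZl`, i.e. the same prefix as the `ckey[1]` value registered later in this log.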
00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:38.753 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.q6s 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.HZJ ]] 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.HZJ 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.5Og 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
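Before the `keyring_file_add_key` RPCs above register the generated key files, a quick format sanity check can be useful. The helper below is a minimal sketch (not from the SPDK sources) that validates the `DHHC-1` shape used throughout this log: the `DHHC-1:` prefix, a two-digit digest id, and a base64 payload whose decoded form is the secret plus a 4-byte CRC-32 trailer; the sample key is the `keys[0]`-style value echoed later in this trace.

```shell
# Minimal sketch (hypothetical helper, not part of SPDK) that sanity-checks
# the DHHC-1 key format used in this log.
check_dhchap_key() {
    key=$1
    case "$key" in
        DHHC-1:0[0-3]:*:) ;;   # prefix, digest id 00..03, trailing colon
        *) return 1 ;;
    esac
    b64=${key#DHHC-1:??:}
    b64=${b64%:}
    # decoded payload must be larger than the 4-byte CRC-32 trailer alone
    [ "$(printf %s "$b64" | base64 -d | wc -c)" -gt 4 ]
}

check_dhchap_key "DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==:" \
    && echo valid
```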
00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Ob4 ]] 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ob4 00:33:39.012 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.wrC 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.E1y ]] 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.E1y 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.66G 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.8UV ]] 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.8UV 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.xV7 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:39.013 00:39:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:39.013 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:39.276 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:39.276 00:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:40.211 Waiting for block devices as requested 00:33:40.211 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:40.469 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:40.469 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:40.727 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:40.727 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:40.727 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:40.727 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:40.984 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:40.984 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:40.984 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:40.984 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:41.242 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:41.242 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:41.242 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:41.501 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:41.501 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:41.501 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:42.075 No valid GPT data, bailing 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:42.075 00:33:42.075 Discovery Log Number of Records 2, Generation counter 2 00:33:42.075 =====Discovery Log Entry 0====== 00:33:42.075 trtype: tcp 00:33:42.075 adrfam: ipv4 00:33:42.075 subtype: current discovery subsystem 00:33:42.075 treq: not specified, sq flow control disable supported 00:33:42.075 portid: 1 00:33:42.075 trsvcid: 4420 00:33:42.075 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:42.075 traddr: 10.0.0.1 00:33:42.075 eflags: none 00:33:42.075 sectype: none 00:33:42.075 =====Discovery Log Entry 1====== 00:33:42.075 trtype: tcp 00:33:42.075 adrfam: ipv4 00:33:42.075 subtype: nvme subsystem 00:33:42.075 treq: not specified, sq flow control disable supported 00:33:42.075 portid: 1 00:33:42.075 trsvcid: 4420 00:33:42.075 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:42.075 traddr: 10.0.0.1 00:33:42.075 eflags: none 00:33:42.075 sectype: none 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:42.075 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: ]] 00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:42.339 00:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.339 nvme0n1
00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9:
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=:
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9:
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: ]]
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=:
00:33:42.339 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:42.340 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.598 nvme0n1
00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:42.598 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:42.598 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:42.598 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.598 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:42.598 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:42.598 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:42.598 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:42.598 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:42.598 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:42.598 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==:
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==:
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==:
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: ]]
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==:
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:42.599 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.858 nvme0n1
00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I:
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7:
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I:
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: ]]
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7:
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:42.858 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.118 nvme0n1
00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==:
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco:
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==:
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: ]]
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco:
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:43.118 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:43.119 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:43.119 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:43.119 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:43.119 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:43.119 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:43.119 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:43.119 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:43.119 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:43.119 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:43.119 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:33:43.119 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:43.119 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.378 nvme0n1
00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:43.378 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:43.378 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:43.378 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:43.378 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.378 00:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=:
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=:
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:43.378 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.379 nvme0n1
00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:43.379 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.637 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:43.637 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:43.637 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:43.637 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:43.637 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.637 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:43.637 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:33:43.637 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:43.637 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:33:43.637 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:43.637 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:43.637 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:33:43.637 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:33:43.637 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9:
00:33:43.637 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=:
00:33:43.637 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:43.637 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9:
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: ]]
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=:
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:43.895 nvme0n1
00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:43.895 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==:
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==:
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==:
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: ]]
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==:
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:44.153 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:44.154 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:44.154 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:44.154 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:44.154 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:44.154 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:44.154 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:44.154 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.154 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.154 nvme0n1 00:33:44.154 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.154 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.154 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.154 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.154 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.154 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.412 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.412 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.412 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.412 00:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.412 00:39:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: ]] 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:44.412 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:44.413 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:44.413 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:44.413 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.413 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.413 nvme0n1 00:33:44.413 00:39:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.413 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.413 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.413 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.413 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.413 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:33:44.671 00:39:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: ]] 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:44.671 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.672 nvme0n1 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.672 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.930 00:39:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.930 nvme0n1 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.930 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.188 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.188 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:45.188 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.188 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:45.188 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.188 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:45.188 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:45.189 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:33:45.189 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:45.189 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:45.189 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:45.189 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:45.189 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:33:45.189 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:33:45.189 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:45.189 00:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: ]] 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.754 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.013 nvme0n1 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: ]] 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:46.013 
00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.013 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.271 nvme0n1 00:33:46.272 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.272 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.272 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.272 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.272 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.272 00:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.272 00:39:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: ]] 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.272 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.540 nvme0n1 00:33:46.540 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.540 00:39:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.540 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.540 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.540 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.540 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.801 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.801 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.801 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.801 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.801 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:33:46.802 
00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: ]] 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:46.802 00:39:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.802 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.061 nvme0n1 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.061 00:39:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.061 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.062 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:47.062 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.062 
00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:47.062 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:47.062 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:47.062 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:47.062 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.062 00:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.319 nvme0n1 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:47.319 00:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:49.217 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:33:49.217 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: ]] 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.218 00:39:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.218 00:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.476 nvme0n1 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: ]] 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.476 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.734 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.734 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.734 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:49.734 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:49.734 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:49.734 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.734 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.734 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:49.734 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.734 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:49.734 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:49.734 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:49.734 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:49.734 00:39:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.734 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.992 nvme0n1 00:33:49.992 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.992 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.992 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.992 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.992 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.992 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: ]] 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.249 00:39:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.249 00:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.816 nvme0n1 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.816 00:39:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:50.816 00:39:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:50.816 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: ]] 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:50.817 00:39:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.817 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.383 nvme0n1 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.383 00:39:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:51.383 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:51.384 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.384 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.384 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:51.384 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.384 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:51.384 00:39:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:51.384 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:51.384 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:51.384 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.384 00:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.641 nvme0n1 00:33:51.641 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.641 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:51.641 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.641 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.641 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:51.641 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: ]] 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:51.900 00:39:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.900 00:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.834 nvme0n1 00:33:52.834 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.834 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:52.834 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.834 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.834 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:52.834 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.834 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:52.835 00:39:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: ]] 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:52.835 00:39:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:52.835 00:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.835 00:39:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.767 nvme0n1 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: ]] 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.767 00:39:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.767 00:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.700 nvme0n1 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: ]] 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.700 00:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.633 nvme0n1 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:55.633 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.634 
00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.634 00:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.200 nvme0n1 00:33:56.200 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.200 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.200 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.200 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.200 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.200 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: ]] 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.459 nvme0n1 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.459 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:56.718 
00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: ]] 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.718 nvme0n1 
00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.718 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:33:56.719 00:39:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: ]] 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.719 
00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.719 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.978 nvme0n1 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.978 00:39:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: ]] 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:56.978 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:56.979 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:56.979 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:56.979 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.979 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.252 nvme0n1 00:33:57.252 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.252 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.252 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.252 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.252 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.252 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.252 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.252 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:33:57.252 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.252 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.252 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.252 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.252 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:57.252 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.252 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:57.252 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:57.252 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:57.252 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.253 00:39:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:57.253 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.254 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.254 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:57.254 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.254 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:57.254 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:57.254 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:57.254 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:57.254 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.254 00:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.517 nvme0n1 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: ]] 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.517 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.775 nvme0n1 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:57.775 
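The cycle the log repeats above for each (digest, dhgroup, keyid) tuple can be sketched as a small standalone script. This is a hedged reconstruction, not the real `host/auth.sh`: `rpc_cmd` is stubbed to echo its arguments instead of talking to the SPDK RPC socket, the function bodies are illustrative assumptions inferred from the xtrace lines, and the DHHC-1 secret is a placeholder.

```shell
#!/usr/bin/env bash
# Sketch of one DH-HMAC-CHAP test iteration as seen in the log.
# Assumption: rpc_cmd normally drives the SPDK JSON-RPC socket; it is
# stubbed here so the sketch runs on its own.
rpc_cmd() { echo "rpc_cmd $*"; }

# Stand-in for the target-side step (host/auth.sh nvmet_auth_set_key):
# it selects the kernel hash name and DH group for the given key slot.
nvmet_auth_set_key() {
  local digest=$1 dhgroup=$2 keyid=$3
  echo "hmac(${digest})"   # target dhchap hash, e.g. hmac(sha384)
  echo "${dhgroup}"        # target dhchap dhgroup, e.g. ffdhe3072
}

# Stand-in for the initiator-side step (connect_authenticate): configure
# allowed digests/dhgroups, attach with the key pair, then detach.
connect_authenticate() {
  local digest=$1 dhgroup=$2 keyid=$3
  local ckey=()
  # Controller key is optional; the log shows keyid 4 with an empty ckey.
  [[ -n ${ckeys[keyid]:-} ]] && ckey=(--dhchap-ctrlr-key "ckey${keyid}")
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
    --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
  rpc_cmd bdev_nvme_detach_controller nvme0
}

# Placeholder secret (NOT a real key from the log).
ckeys=("DHHC-1:03:placeholder-ctrlr-key:")
out=$(nvmet_auth_set_key sha384 ffdhe3072 0
      connect_authenticate sha384 ffdhe3072 0)
printf '%s\n' "$out"
```

In the real suite the outer loops (`for dhgroup in "${dhgroups[@]}"`, `for keyid in "${!keys[@]}"`) iterate this pair over every group and key slot, which is why the same sequence recurs throughout the log.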
00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: ]] 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.775 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.776 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:57.776 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.776 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:57.776 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:57.776 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:57.776 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:57.776 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.776 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.034 nvme0n1 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 
00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: ]] 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:58.034 00:39:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.034 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.293 nvme0n1 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.293 00:39:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: ]] 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.293 00:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.552 nvme0n1 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
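The `get_main_ns_ip` steps traced above (build an `ip_candidates` map, pick the variable name for the transport, then echo its value) can be sketched in isolation. This is a hedged reduction of the helper in `nvmf/common.sh`: the variable names match the log, but the body is simplified and the fallback handling for an unset transport is omitted.

```shell
#!/usr/bin/env bash
# Sketch of get_main_ns_ip: map the transport type to the environment
# variable that names the address to dial, then resolve it.
get_main_ns_ip() {
  local transport=$1 ip
  local -A ip_candidates=(
    [rdma]=NVMF_FIRST_TARGET_IP
    [tcp]=NVMF_INITIATOR_IP
  )
  ip=${ip_candidates[$transport]}
  # Indirect expansion: NVMF_INITIATOR_IP -> its value, e.g. 10.0.0.1
  echo "${!ip}"
}

# Assumed environment, matching the address echoed in the log.
NVMF_INITIATOR_IP=10.0.0.1
main_ip=$(get_main_ns_ip tcp)
echo "$main_ip"
```

The indirection is why the trace shows `ip=NVMF_INITIATOR_IP` (the variable *name*) one step before `echo 10.0.0.1` (its value).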
xtrace_disable 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.552 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.817 nvme0n1 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.817 00:39:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: ]] 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.817 00:39:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:58.817 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.817 00:39:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.075 nvme0n1 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: ]] 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.075 
00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.075 00:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.334 nvme0n1 00:33:59.334 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.334 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.334 00:39:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.334 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.334 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:59.592 00:39:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: ]] 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.592 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.850 nvme0n1 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: ]] 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:59.850 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.851 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.851 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:59.851 00:39:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.851 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:59.851 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:59.851 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:59.851 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:59.851 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.851 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.111 nvme0n1 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.111 00:39:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:00.111 00:39:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:00.111 
00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.111 00:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.370 nvme0n1 00:34:00.370 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.370 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.370 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.370 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.370 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.370 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:00.628 00:39:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: ]] 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.628 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.195 nvme0n1 
00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:34:01.195 00:39:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: ]] 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.195 
00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.195 00:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.761 nvme0n1 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.761 00:39:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: ]] 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.761 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.334 nvme0n1 00:34:02.334 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.334 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.334 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.334 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.334 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.334 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.334 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.334 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:02.334 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.334 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.334 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.334 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.334 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: ]] 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.335 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:34:02.336 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:02.336 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:02.336 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:02.336 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.336 00:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.917 nvme0n1 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:02.917 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.918 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:03.204 nvme0n1 00:34:03.204 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.204 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.204 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.204 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.204 00:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.204 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:03.537 00:39:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: ]] 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.537 00:39:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.537 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.225 nvme0n1 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:34:04.225 00:39:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: ]] 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:04.225 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.226 00:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.162 nvme0n1 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.162 
00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: ]] 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:05.162 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.163 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.163 00:39:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:05.163 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.163 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:05.163 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:05.163 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:05.163 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:05.163 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.163 00:39:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.096 nvme0n1 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.096 00:39:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: ]] 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:06.096 00:39:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:06.096 00:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:07.031 nvme0n1 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=:
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=:
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:07.031 00:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:07.964 nvme0n1 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:07.964
00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9:
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=:
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9:
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: ]]
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=:
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:07.964 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.223 nvme0n1
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:08.223 00:39:31
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==:
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==:
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==:
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: ]]
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==:
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:08.223 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.224 00:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.224 nvme0n1
00:34:08.224 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.224 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:08.224 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.224 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.224 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:08.224 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[
0 == 0 ]]
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I:
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7:
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I:
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: ]]
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7:
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.482 nvme0n1
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.482 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.741 00:39:32
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==:
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco:
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==:
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: ]]
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco:
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.741 nvme0n1
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=:
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=:
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.741 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:09.001 nvme0n1
00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd
bdev_nvme_get_controllers 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: ]] 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.001 00:39:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.001 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.260 nvme0n1 00:34:09.260 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.260 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.260 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.260 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
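Before each `bdev_nvme_attach_controller` call, the trace above runs `get_main_ns_ip`: it builds a transport-to-variable-name map, indirects through the chosen name, and echoes the resulting address (`10.0.0.1` here). The following is a reconstruction of that selection logic based only on what the trace shows, not SPDK's `nvmf/common.sh` verbatim:

```shell
#!/usr/bin/env bash
# Reconstruction (from the trace, not the real nvmf/common.sh) of the
# get_main_ns_ip candidate selection: map transport -> variable name,
# then expand that variable indirectly and echo the IP.
declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
NVMF_INITIATOR_IP=10.0.0.1   # value seen in the log
TEST_TRANSPORT=tcp

get_main_ns_ip() {
    local ip
    [[ -z "$TEST_TRANSPORT" ]] && return 1                    # no transport set
    [[ -z "${ip_candidates[$TEST_TRANSPORT]}" ]] && return 1  # unknown transport
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z "${!ip}" ]] && return 1                             # candidate unset/empty
    echo "${!ip}"                                             # indirect expansion
}

main_ip=$(get_main_ns_ip)
echo "$main_ip"
```

The `[[ -z tcp ]]` / `[[ -z NVMF_INITIATOR_IP ]]` lines in the trace are these guard checks after expansion; the final `echo 10.0.0.1` is the indirect lookup succeeding.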
common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.260 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.260 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.260 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.260 00:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:09.260 00:39:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: ]] 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.260 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.261 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:34:09.261 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.261 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.261 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.261 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.261 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.261 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.261 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.261 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:09.261 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.261 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.519 nvme0n1 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: ]] 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:34:09.519 
00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.519 00:39:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.519 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.780 nvme0n1 00:34:09.780 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.780 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.780 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.780 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.780 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.780 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.780 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.780 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.780 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.780 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.780 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.781 00:39:33 
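The trace repeats one fixed sequence per `(dhgroup, keyid)` pair: set the target key via `nvmet_auth_set_key`, configure the host with `bdev_nvme_set_options`, attach with the matching `--dhchap-key`, verify the controller name, and detach. A dry-run sketch of that control flow (illustrative only; `rpc` here merely records the call it would make, while the NQNs and address are the ones appearing in the log):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the per-(dhgroup, keyid) loop the log walks through.
# "rpc" stands in for SPDK's rpc_cmd and only records the invocation,
# so the flow can be inspected without a live target.
cmds=()
rpc() { cmds+=("rpc_cmd $*"); }

hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

for dhgroup in ffdhe2048 ffdhe3072; do   # the log goes on to ffdhe4096 etc.
    for keyid in 0 1; do                 # the log iterates keyids 0..4
        rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"
        rpc bdev_nvme_detach_controller nvme0
    done
done
printf '%s\n' "${cmds[@]}"
```

Keyids 1 through 3 additionally pass `--dhchap-ctrlr-key ckeyN` for bidirectional authentication; keyids 0 and 4 in this trace have the controller key handled differently (keyid 4's `ckey` is empty), which is why some attach calls above omit it.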
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.781 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:09.781 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.781 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:09.781 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:09.781 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:09.781 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:34:09.781 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:34:09.781 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:09.781 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:09.781 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:34:09.781 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: ]] 00:34:09.781 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:34:09.781 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:09.781 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.781 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:09.781 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:34:09.782 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:09.782 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.782 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:09.782 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.782 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.782 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.782 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.782 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.782 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.782 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.782 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.782 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.782 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.782 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.783 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.783 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.783 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.783 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:09.783 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.783 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.044 nvme0n1 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:10.044 00:39:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.044 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.303 nvme0n1 00:34:10.303 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.303 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.303 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:10.303 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.303 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.303 00:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: ]] 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.303 00:39:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.303 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.561 nvme0n1 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.561 00:39:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:34:10.561 00:39:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: ]] 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.561 00:39:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.561 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.128 nvme0n1 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.128 00:39:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: ]] 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:11.128 00:39:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:11.128 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.129 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.386 nvme0n1 00:34:11.386 00:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.386 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.386 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.386 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.386 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.386 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.386 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.386 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.386 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.386 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.386 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: ]] 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:11.387 00:39:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.387 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.645 nvme0n1 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.645 
00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.645 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.904 nvme0n1 00:34:11.904 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.904 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.904 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.904 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.904 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:12.162 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.162 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.162 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.163 00:39:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: ]] 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.163 00:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.730 nvme0n1 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: ]] 00:34:12.730 00:39:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.730 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.307 nvme0n1 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: ]] 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:13.307 
00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.307 00:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.874 nvme0n1 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.874 00:39:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: ]] 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.874 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:14.440 nvme0n1 00:34:14.440 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.440 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.440 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.440 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.440 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.440 00:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:14.440 
00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.440 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.698 nvme0n1 00:34:14.698 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.698 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.698 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.698 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDVkMWJhNTJiMTk3ZDM0Y2RhZGEyYzA3MWIzNzI4MmRsN8Y9: 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: ]] 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTFkZGE5M2Q1Zjk4ZDY1YmY3OTBiMTdiZmZhMDYzNzhkODIzZTFjOGFhNWVmZjU0ZmVlNGUwMjllZGFkNjBkZOSp8C8=: 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.956 00:39:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.956 00:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.889 nvme0n1 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.889 00:39:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: ]] 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.889 00:39:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.889 00:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.824 nvme0n1 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.824 00:39:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: ]] 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:16.824 00:39:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.824 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.825 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.825 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.825 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.825 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.825 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.825 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.825 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.825 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.825 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:16.825 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.825 00:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.758 nvme0n1 00:34:17.758 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.758 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.758 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.758 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.758 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.758 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.758 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.758 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.758 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.758 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.758 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.758 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.758 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.759 00:39:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmM1YTYzZWFlNDA2MzM1YzVjMDMyZWRjYWI3MDE5ZGU0MWUzNWZkYTc0ZTEzYWM5quoLAQ==: 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: ]] 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1ZDYxMDYyYjc2ODM5NTE2YjkzMGRjZTU3NDJkNjUWbsco: 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.759 00:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:18.695 nvme0n1 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDY3NDQ3NzViMmQwYzZmYTI4NTI1YmM3YTEzMjk3OWRjMTExOWYwOGNhYTg1MTg1NzVlYmJjZjhkMjY1OTM3Y/rKD9M=: 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.695 
00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.695 00:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.269 nvme0n1 00:34:19.269 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.269 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.269 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.269 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.269 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.527 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.527 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.527 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: ]] 00:34:19.528 
00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.528 request: 00:34:19.528 { 00:34:19.528 "name": "nvme0", 00:34:19.528 "trtype": "tcp", 00:34:19.528 "traddr": "10.0.0.1", 00:34:19.528 "adrfam": "ipv4", 00:34:19.528 "trsvcid": "4420", 00:34:19.528 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:19.528 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:19.528 "prchk_reftag": false, 00:34:19.528 "prchk_guard": false, 00:34:19.528 "hdgst": false, 00:34:19.528 "ddgst": false, 00:34:19.528 "allow_unrecognized_csi": false, 00:34:19.528 "method": "bdev_nvme_attach_controller", 00:34:19.528 "req_id": 1 00:34:19.528 } 00:34:19.528 Got JSON-RPC error response 00:34:19.528 response: 00:34:19.528 { 00:34:19.528 "code": -5, 00:34:19.528 "message": "Input/output 
error" 00:34:19.528 } 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.528 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.786 request: 00:34:19.786 { 00:34:19.786 "name": "nvme0", 00:34:19.786 "trtype": "tcp", 00:34:19.786 "traddr": "10.0.0.1", 
00:34:19.786 "adrfam": "ipv4", 00:34:19.786 "trsvcid": "4420", 00:34:19.786 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:19.786 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:19.786 "prchk_reftag": false, 00:34:19.786 "prchk_guard": false, 00:34:19.786 "hdgst": false, 00:34:19.786 "ddgst": false, 00:34:19.786 "dhchap_key": "key2", 00:34:19.786 "allow_unrecognized_csi": false, 00:34:19.786 "method": "bdev_nvme_attach_controller", 00:34:19.786 "req_id": 1 00:34:19.786 } 00:34:19.786 Got JSON-RPC error response 00:34:19.786 response: 00:34:19.786 { 00:34:19.786 "code": -5, 00:34:19.786 "message": "Input/output error" 00:34:19.786 } 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.786 00:39:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:19.786 00:39:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.786 request: 00:34:19.786 { 00:34:19.786 "name": "nvme0", 00:34:19.786 "trtype": "tcp", 00:34:19.786 "traddr": "10.0.0.1", 00:34:19.786 "adrfam": "ipv4", 00:34:19.786 "trsvcid": "4420", 00:34:19.786 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:19.786 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:19.786 "prchk_reftag": false, 00:34:19.786 "prchk_guard": false, 00:34:19.786 "hdgst": false, 00:34:19.786 "ddgst": false, 00:34:19.786 "dhchap_key": "key1", 00:34:19.786 "dhchap_ctrlr_key": "ckey2", 00:34:19.786 "allow_unrecognized_csi": false, 00:34:19.786 "method": "bdev_nvme_attach_controller", 00:34:19.786 "req_id": 1 00:34:19.786 } 00:34:19.786 Got JSON-RPC error response 00:34:19.786 response: 00:34:19.786 { 00:34:19.786 "code": -5, 00:34:19.786 "message": "Input/output error" 00:34:19.786 } 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.786 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.045 nvme0n1 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.046 00:39:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: ]] 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.046 00:39:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.046 request: 00:34:20.046 { 00:34:20.046 "name": "nvme0", 00:34:20.046 "dhchap_key": "key1", 00:34:20.046 "dhchap_ctrlr_key": "ckey2", 00:34:20.046 "method": "bdev_nvme_set_keys", 00:34:20.046 "req_id": 1 00:34:20.046 } 00:34:20.046 Got JSON-RPC error response 00:34:20.046 response: 00:34:20.046 { 00:34:20.046 "code": -13, 00:34:20.046 "message": "Permission denied" 00:34:20.046 } 00:34:20.046 
00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:20.046 00:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:21.420 00:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.420 00:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:21.420 00:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.420 00:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.420 00:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.420 00:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:21.420 00:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:22.355 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.355 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:22.355 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.355 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.355 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.355 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzliOTA2ZTcxZmUyNTQ5MjdjMThlYjQ2YTYxNGRhZGFhNDNhMGQzNWIwNTIzNjA1Q2cT/g==: 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: ]] 00:34:22.356 00:39:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDFkNjZlZjVhNzk2OWRmZWJlNTYzZWE4NzYyZTQxYjcwYTIyMzEzY2E5OTc4NDE3OnxDlw==: 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.356 00:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.356 nvme0n1 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.356 00:39:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODQ1NmYwOTc3ZjI2MjI3YzM0MjkzOGNlYjRkYjNjN2b/0K+I: 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: ]] 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTQyMGY5OWRhNzRjOTZlMTc5OThkOTM5NDE5ODc0NjdylBg7: 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:22.356 
00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.356 request: 00:34:22.356 { 00:34:22.356 "name": "nvme0", 00:34:22.356 "dhchap_key": "key2", 00:34:22.356 "dhchap_ctrlr_key": "ckey1", 00:34:22.356 "method": "bdev_nvme_set_keys", 00:34:22.356 "req_id": 1 00:34:22.356 } 00:34:22.356 Got JSON-RPC error response 00:34:22.356 response: 00:34:22.356 { 00:34:22.356 "code": -13, 00:34:22.356 "message": "Permission denied" 00:34:22.356 } 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:22.356 00:39:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.356 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.615 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:22.615 00:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:23.549 rmmod nvme_tcp 00:34:23.549 rmmod nvme_fabrics 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 384034 ']' 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 384034 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 384034 ']' 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 384034 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 384034 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 384034' 00:34:23.549 killing process with pid 384034 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 384034 00:34:23.549 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 384034 00:34:23.807 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:23.807 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:23.807 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:23.807 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:34:23.807 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:34:23.807 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:23.807 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:34:23.807 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:23.807 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:23.807 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:23.807 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:23.807 00:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.348 00:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:26.348 00:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:26.348 00:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:26.348 00:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:26.348 00:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:26.348 00:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:34:26.348 00:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:26.348 00:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:26.348 00:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:26.348 00:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:26.348 00:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:26.348 00:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:26.348 00:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:27.285 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:27.285 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:27.285 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:27.285 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:27.285 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:27.285 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:27.285 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:27.285 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:27.285 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:27.285 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:27.285 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:27.285 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:27.285 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:27.285 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:27.285 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:27.285 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:28.225 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:28.225 00:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.q6s /tmp/spdk.key-null.5Og /tmp/spdk.key-sha256.wrC /tmp/spdk.key-sha384.66G /tmp/spdk.key-sha512.xV7 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:28.225 00:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:29.601 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:29.601 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:29.601 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:29.601 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:29.601 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:29.601 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:29.601 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:29.601 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:29.601 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:29.601 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:29.601 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:29.601 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:29.601 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:29.601 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:29.601 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:29.601 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:29.601 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:29.601 00:34:29.601 real 0m53.901s 00:34:29.601 user 0m51.670s 00:34:29.601 sys 0m6.102s 00:34:29.601 00:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:29.601 00:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.601 ************************************ 00:34:29.601 END TEST nvmf_auth_host 00:34:29.601 ************************************ 00:34:29.601 00:39:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:34:29.601 00:39:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:29.601 00:39:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:29.601 00:39:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:29.601 00:39:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.601 ************************************ 00:34:29.601 START TEST nvmf_digest 00:34:29.601 ************************************ 00:34:29.601 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:29.601 * Looking for test storage... 00:34:29.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:29.601 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:29.601 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:34:29.601 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:29.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.860 --rc genhtml_branch_coverage=1 00:34:29.860 --rc genhtml_function_coverage=1 00:34:29.860 --rc genhtml_legend=1 00:34:29.860 --rc geninfo_all_blocks=1 00:34:29.860 --rc geninfo_unexecuted_blocks=1 00:34:29.860 00:34:29.860 ' 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:29.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.860 --rc genhtml_branch_coverage=1 00:34:29.860 --rc genhtml_function_coverage=1 00:34:29.860 --rc genhtml_legend=1 00:34:29.860 --rc geninfo_all_blocks=1 00:34:29.860 --rc geninfo_unexecuted_blocks=1 00:34:29.860 00:34:29.860 ' 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:29.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.860 --rc genhtml_branch_coverage=1 00:34:29.860 --rc genhtml_function_coverage=1 00:34:29.860 --rc genhtml_legend=1 00:34:29.860 --rc geninfo_all_blocks=1 00:34:29.860 --rc geninfo_unexecuted_blocks=1 00:34:29.860 00:34:29.860 ' 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:29.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.860 --rc genhtml_branch_coverage=1 00:34:29.860 --rc genhtml_function_coverage=1 00:34:29.860 --rc genhtml_legend=1 00:34:29.860 --rc geninfo_all_blocks=1 00:34:29.860 --rc geninfo_unexecuted_blocks=1 00:34:29.860 00:34:29.860 ' 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:29.860 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:29.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:29.861 00:39:53 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:29.861 00:39:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:31.767 00:39:55 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:31.767 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:31.767 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.767 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:31.768 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:31.768 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:31.768 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:32.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:32.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:34:32.025 00:34:32.025 --- 10.0.0.2 ping statistics --- 00:34:32.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.025 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:32.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:32.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:34:32.025 00:34:32.025 --- 10.0.0.1 ping statistics --- 00:34:32.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.025 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:32.025 ************************************ 00:34:32.025 START TEST nvmf_digest_clean 00:34:32.025 ************************************ 00:34:32.025 
00:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=394438 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 394438 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 394438 ']' 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:32.025 00:39:55 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:32.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:32.025 00:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:32.025 [2024-11-18 00:39:55.841529] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:34:32.026 [2024-11-18 00:39:55.841627] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:32.283 [2024-11-18 00:39:55.916968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.283 [2024-11-18 00:39:55.964405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:32.283 [2024-11-18 00:39:55.964470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:32.283 [2024-11-18 00:39:55.964499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:32.283 [2024-11-18 00:39:55.964510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:32.283 [2024-11-18 00:39:55.964520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:32.283 [2024-11-18 00:39:55.965117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.283 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:32.283 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:32.283 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:32.283 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:32.283 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:32.283 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:32.283 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:32.283 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:32.283 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:32.283 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.284 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:32.542 null0 00:34:32.542 [2024-11-18 00:39:56.209641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:32.542 [2024-11-18 00:39:56.233878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:32.542 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.542 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
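The `run_bperf randread 4096 128 false` call above maps its positional arguments (workload, block size, queue depth, DSA scan) onto the bdevperf command line that appears a few entries later (`-m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc`). A simplified sketch of that mapping, condensed from what host/digest.sh does (the `bdevperf` binary path is abbreviated here):

```shell
# Sketch of how run_bperf's positional args become bdevperf flags,
# simplified from host/digest.sh as seen in the log; the runtime of 2s
# matches the "runtime=2" assignment earlier in this section.
build_bperf_cmd() {
    local rw=$1 bs=$2 qd=$3 scan_dsa=$4
    # scan_dsa=false in this run, so no accel/DSA options are appended
    echo "bdevperf -m 2 -r /var/tmp/bperf.sock -w $rw -o $bs -t 2 -q $qd -z --wait-for-rpc"
}
build_bperf_cmd randread 4096 128 false
```

The second `run_bperf` call later in the log (`randread 131072 16 false`) exercises the same path with 128 KiB I/O at queue depth 16, which is why the "greater than zero copy threshold" notice appears for that run only.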
00:34:32.542 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:32.542 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:32.542 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:32.542 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:32.542 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:32.542 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:32.542 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=394461 00:34:32.542 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:32.542 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 394461 /var/tmp/bperf.sock 00:34:32.542 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 394461 ']' 00:34:32.542 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:32.542 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:32.542 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:32.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:32.542 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:32.542 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:32.542 [2024-11-18 00:39:56.280631] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:34:32.542 [2024-11-18 00:39:56.280708] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394461 ] 00:34:32.542 [2024-11-18 00:39:56.348218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.800 [2024-11-18 00:39:56.397871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:32.800 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:32.800 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:32.800 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:32.800 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:32.800 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:33.364 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:33.364 00:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:33.622 nvme0n1 00:34:33.622 00:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:33.622 00:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:33.880 Running I/O for 2 seconds... 00:34:35.746 18973.00 IOPS, 74.11 MiB/s [2024-11-17T23:39:59.568Z] 19064.00 IOPS, 74.47 MiB/s 00:34:35.746 Latency(us) 00:34:35.746 [2024-11-17T23:39:59.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.746 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:35.746 nvme0n1 : 2.00 19086.36 74.56 0.00 0.00 6698.53 3349.62 13495.56 00:34:35.746 [2024-11-17T23:39:59.568Z] =================================================================================================================== 00:34:35.746 [2024-11-17T23:39:59.568Z] Total : 19086.36 74.56 0.00 0.00 6698.53 3349.62 13495.56 00:34:35.746 { 00:34:35.746 "results": [ 00:34:35.746 { 00:34:35.746 "job": "nvme0n1", 00:34:35.746 "core_mask": "0x2", 00:34:35.746 "workload": "randread", 00:34:35.746 "status": "finished", 00:34:35.746 "queue_depth": 128, 00:34:35.746 "io_size": 4096, 00:34:35.746 "runtime": 2.004363, 00:34:35.746 "iops": 19086.36309889975, 00:34:35.746 "mibps": 74.55610585507715, 00:34:35.746 "io_failed": 0, 00:34:35.746 "io_timeout": 0, 00:34:35.746 "avg_latency_us": 6698.5281125207175, 00:34:35.746 "min_latency_us": 3349.617777777778, 00:34:35.746 "max_latency_us": 13495.561481481482 00:34:35.746 } 00:34:35.746 ], 00:34:35.746 "core_count": 1 00:34:35.746 } 00:34:35.746 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:35.746 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:34:35.746 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:35.746 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:35.746 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:35.746 | select(.opcode=="crc32c") 00:34:35.746 | "\(.module_name) \(.executed)"' 00:34:36.004 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:36.004 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:36.004 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:36.004 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:36.004 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 394461 00:34:36.004 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 394461 ']' 00:34:36.004 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 394461 00:34:36.004 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:36.004 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:36.004 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394461 00:34:36.004 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:36.004 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:36.004 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394461' 00:34:36.004 killing process with pid 394461 00:34:36.004 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 394461 00:34:36.004 Received shutdown signal, test time was about 2.000000 seconds 00:34:36.004 00:34:36.004 Latency(us) 00:34:36.004 [2024-11-17T23:39:59.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:36.004 [2024-11-17T23:39:59.826Z] =================================================================================================================== 00:34:36.004 [2024-11-17T23:39:59.826Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:36.004 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 394461 00:34:36.262 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:36.262 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:36.262 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:36.262 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:36.262 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:36.262 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:36.262 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:36.262 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=394986 00:34:36.262 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:36.262 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 394986 /var/tmp/bperf.sock 00:34:36.262 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 394986 ']' 00:34:36.262 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:36.262 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:36.262 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:36.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:36.262 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:36.262 00:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:36.262 [2024-11-18 00:40:00.026739] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:34:36.262 [2024-11-18 00:40:00.026850] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394986 ] 00:34:36.262 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:36.262 Zero copy mechanism will not be used. 
00:34:36.520 [2024-11-18 00:40:00.094728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:36.520 [2024-11-18 00:40:00.140473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:36.520 00:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:36.520 00:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:36.520 00:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:36.520 00:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:36.520 00:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:37.087 00:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:37.087 00:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:37.345 nvme0n1 00:34:37.345 00:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:37.345 00:40:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:37.601 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:37.601 Zero copy mechanism will not be used. 00:34:37.601 Running I/O for 2 seconds... 
00:34:39.468 5858.00 IOPS, 732.25 MiB/s [2024-11-17T23:40:03.291Z] 5701.50 IOPS, 712.69 MiB/s 00:34:39.469 Latency(us) 00:34:39.469 [2024-11-17T23:40:03.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:39.469 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:39.469 nvme0n1 : 2.00 5701.87 712.73 0.00 0.00 2801.88 670.53 7670.14 00:34:39.469 [2024-11-17T23:40:03.291Z] =================================================================================================================== 00:34:39.469 [2024-11-17T23:40:03.291Z] Total : 5701.87 712.73 0.00 0.00 2801.88 670.53 7670.14 00:34:39.469 { 00:34:39.469 "results": [ 00:34:39.469 { 00:34:39.469 "job": "nvme0n1", 00:34:39.469 "core_mask": "0x2", 00:34:39.469 "workload": "randread", 00:34:39.469 "status": "finished", 00:34:39.469 "queue_depth": 16, 00:34:39.469 "io_size": 131072, 00:34:39.469 "runtime": 2.003027, 00:34:39.469 "iops": 5701.870219422904, 00:34:39.469 "mibps": 712.733777427863, 00:34:39.469 "io_failed": 0, 00:34:39.469 "io_timeout": 0, 00:34:39.469 "avg_latency_us": 2801.877349521836, 00:34:39.469 "min_latency_us": 670.5303703703704, 00:34:39.469 "max_latency_us": 7670.139259259259 00:34:39.469 } 00:34:39.469 ], 00:34:39.469 "core_count": 1 00:34:39.469 } 00:34:39.469 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:39.469 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:39.469 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:39.469 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:39.469 | select(.opcode=="crc32c") 00:34:39.469 | "\(.module_name) \(.executed)"' 00:34:39.469 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:39.727 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:39.727 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:39.727 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:39.727 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:39.727 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 394986 00:34:39.727 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 394986 ']' 00:34:39.727 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 394986 00:34:39.727 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:39.727 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:39.727 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394986 00:34:39.727 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:39.727 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:39.727 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394986' 00:34:39.727 killing process with pid 394986 00:34:39.727 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 394986 00:34:39.727 Received shutdown signal, test time was about 2.000000 seconds 00:34:39.727 
00:34:39.727 Latency(us) 00:34:39.727 [2024-11-17T23:40:03.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:39.727 [2024-11-17T23:40:03.549Z] =================================================================================================================== 00:34:39.727 [2024-11-17T23:40:03.549Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:39.727 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 394986 00:34:39.985 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:39.985 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:39.985 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:39.985 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:39.985 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:39.985 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:39.985 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:39.985 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=395399 00:34:39.985 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:39.985 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 395399 /var/tmp/bperf.sock 00:34:39.985 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 395399 ']' 00:34:39.985 00:40:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:39.985 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:39.985 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:39.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:39.985 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:39.985 00:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:39.985 [2024-11-18 00:40:03.780945] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:34:39.985 [2024-11-18 00:40:03.781044] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395399 ] 00:34:40.243 [2024-11-18 00:40:03.848684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:40.243 [2024-11-18 00:40:03.897376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:40.243 00:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:40.243 00:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:40.243 00:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:40.243 00:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:40.243 00:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:40.810 00:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:40.810 00:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:41.068 nvme0n1 00:34:41.068 00:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:41.068 00:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:41.326 Running I/O for 2 seconds... 
00:34:43.193 21365.00 IOPS, 83.46 MiB/s [2024-11-17T23:40:07.015Z] 21487.00 IOPS, 83.93 MiB/s
00:34:43.193 Latency(us)
00:34:43.193 [2024-11-17T23:40:07.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:43.193 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:43.193 nvme0n1 : 2.01 21497.15 83.97 0.00 0.00 5944.47 2657.85 9514.86
00:34:43.193 [2024-11-17T23:40:07.015Z] ===================================================================================================================
00:34:43.193 [2024-11-17T23:40:07.015Z] Total : 21497.15 83.97 0.00 0.00 5944.47 2657.85 9514.86
00:34:43.193 {
00:34:43.193 "results": [
00:34:43.193 {
00:34:43.193 "job": "nvme0n1",
00:34:43.193 "core_mask": "0x2",
00:34:43.193 "workload": "randwrite",
00:34:43.193 "status": "finished",
00:34:43.193 "queue_depth": 128,
00:34:43.193 "io_size": 4096,
00:34:43.193 "runtime": 2.007196,
00:34:43.193 "iops": 21497.153242633005,
00:34:43.193 "mibps": 83.97325485403518,
00:34:43.193 "io_failed": 0,
00:34:43.193 "io_timeout": 0,
00:34:43.193 "avg_latency_us": 5944.4729069898185,
00:34:43.193 "min_latency_us": 2657.8488888888887,
00:34:43.193 "max_latency_us": 9514.856296296297
00:34:43.193 }
00:34:43.193 ],
00:34:43.193 "core_count": 1
00:34:43.193 }
00:34:43.193 00:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:34:43.193 00:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:34:43.193 00:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:34:43.193 00:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:34:43.193 | select(.opcode=="crc32c")
00:34:43.193 | "\(.module_name) \(.executed)"'
00:34:43.193 00:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:43.452 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:43.452 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:43.452 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:43.452 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:43.452 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 395399 00:34:43.452 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 395399 ']' 00:34:43.452 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 395399 00:34:43.452 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:43.452 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:43.452 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395399 00:34:43.452 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:43.452 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:43.452 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395399' 00:34:43.452 killing process with pid 395399 00:34:43.452 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 395399 00:34:43.452 Received shutdown signal, test time was about 2.000000 seconds 00:34:43.452 
00:34:43.452 Latency(us) 00:34:43.452 [2024-11-17T23:40:07.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:43.452 [2024-11-17T23:40:07.275Z] =================================================================================================================== 00:34:43.453 [2024-11-17T23:40:07.275Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:43.453 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 395399 00:34:43.712 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:34:43.712 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:43.712 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:43.712 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:43.712 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:43.712 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:43.712 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:43.712 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=395808 00:34:43.712 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:43.712 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 395808 /var/tmp/bperf.sock 00:34:43.712 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 395808 ']' 00:34:43.712 00:40:07 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:43.712 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:43.712 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:43.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:43.712 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:43.712 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:43.712 [2024-11-18 00:40:07.466258] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:34:43.712 [2024-11-18 00:40:07.466370] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395808 ] 00:34:43.712 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:43.712 Zero copy mechanism will not be used. 
00:34:43.712 [2024-11-18 00:40:07.532602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.971 [2024-11-18 00:40:07.580494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:43.971 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:43.971 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:43.971 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:43.971 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:43.971 00:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:44.536 00:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:44.536 00:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:44.794 nvme0n1 00:34:44.794 00:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:44.794 00:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:45.052 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:45.052 Zero copy mechanism will not be used. 00:34:45.052 Running I/O for 2 seconds... 
00:34:46.917 5973.00 IOPS, 746.62 MiB/s [2024-11-17T23:40:10.739Z] 6090.50 IOPS, 761.31 MiB/s
00:34:46.917 Latency(us)
00:34:46.917 [2024-11-17T23:40:10.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:46.917 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:34:46.917 nvme0n1 : 2.00 6088.94 761.12 0.00 0.00 2621.13 1711.22 4417.61
00:34:46.917 [2024-11-17T23:40:10.739Z] ===================================================================================================================
00:34:46.917 [2024-11-17T23:40:10.739Z] Total : 6088.94 761.12 0.00 0.00 2621.13 1711.22 4417.61
00:34:46.917 {
00:34:46.917 "results": [
00:34:46.917 {
00:34:46.917 "job": "nvme0n1",
00:34:46.917 "core_mask": "0x2",
00:34:46.917 "workload": "randwrite",
00:34:46.917 "status": "finished",
00:34:46.917 "queue_depth": 16,
00:34:46.917 "io_size": 131072,
00:34:46.917 "runtime": 2.00396,
00:34:46.917 "iops": 6088.9438910956305,
00:34:46.918 "mibps": 761.1179863869538,
00:34:46.918 "io_failed": 0,
00:34:46.918 "io_timeout": 0,
00:34:46.918 "avg_latency_us": 2621.1281880930264,
00:34:46.918 "min_latency_us": 1711.2177777777779,
00:34:46.918 "max_latency_us": 4417.6118518518515
00:34:46.918 }
00:34:46.918 ],
00:34:46.918 "core_count": 1
00:34:46.918 }
00:34:46.918 00:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:34:46.918 00:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:34:46.918 00:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:34:46.918 00:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:34:46.918 | select(.opcode=="crc32c")
00:34:46.918 | "\(.module_name) \(.executed)"'
00:34:46.918 00:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:47.176 00:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:47.176 00:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:47.176 00:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:47.176 00:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:47.176 00:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 395808 00:34:47.176 00:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 395808 ']' 00:34:47.176 00:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 395808 00:34:47.176 00:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:47.176 00:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:47.176 00:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395808 00:34:47.434 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:47.434 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:47.434 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395808' 00:34:47.434 killing process with pid 395808 00:34:47.434 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 395808 00:34:47.434 Received shutdown signal, test time was about 2.000000 seconds 00:34:47.434 
00:34:47.434 Latency(us) 00:34:47.434 [2024-11-17T23:40:11.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:47.434 [2024-11-17T23:40:11.256Z] =================================================================================================================== 00:34:47.434 [2024-11-17T23:40:11.256Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:47.434 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 395808 00:34:47.434 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 394438 00:34:47.434 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 394438 ']' 00:34:47.434 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 394438 00:34:47.434 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:47.434 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:47.434 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394438 00:34:47.434 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:47.434 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:47.434 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394438' 00:34:47.434 killing process with pid 394438 00:34:47.434 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 394438 00:34:47.434 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 394438 00:34:47.692 00:34:47.692 real 0m15.646s 
00:34:47.692 user 0m31.710s 00:34:47.692 sys 0m4.159s 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:47.692 ************************************ 00:34:47.692 END TEST nvmf_digest_clean 00:34:47.692 ************************************ 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:47.692 ************************************ 00:34:47.692 START TEST nvmf_digest_error 00:34:47.692 ************************************ 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=396354 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:47.692 00:40:11 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 396354 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 396354 ']' 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:47.692 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:47.951 [2024-11-18 00:40:11.542933] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:34:47.951 [2024-11-18 00:40:11.543040] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:47.951 [2024-11-18 00:40:11.615365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.951 [2024-11-18 00:40:11.658103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:47.951 [2024-11-18 00:40:11.658164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:47.951 [2024-11-18 00:40:11.658194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:47.951 [2024-11-18 00:40:11.658206] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:47.951 [2024-11-18 00:40:11.658215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:47.951 [2024-11-18 00:40:11.658807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.951 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:47.951 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:47.951 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:47.951 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:47.951 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:48.209 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:48.210 [2024-11-18 00:40:11.795540] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.210 00:40:11 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:48.210 null0 00:34:48.210 [2024-11-18 00:40:11.908555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:48.210 [2024-11-18 00:40:11.932798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=396389 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 396389 /var/tmp/bperf.sock 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 396389 ']' 
00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:48.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:48.210 00:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:48.210 [2024-11-18 00:40:11.982695] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:34:48.210 [2024-11-18 00:40:11.982770] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396389 ] 00:34:48.466 [2024-11-18 00:40:12.054597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:48.466 [2024-11-18 00:40:12.103112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:48.466 00:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:48.466 00:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:48.466 00:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:48.466 00:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:48.724 00:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:48.724 00:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.724 00:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:48.724 00:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.724 00:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:48.724 00:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:49.290 nvme0n1 00:34:49.290 00:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:49.290 00:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.290 00:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:49.290 00:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.290 00:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:49.290 00:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:49.290 Running I/O for 2 seconds... 00:34:49.290 [2024-11-18 00:40:13.079258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.290 [2024-11-18 00:40:13.079330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.290 [2024-11-18 00:40:13.079380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.290 [2024-11-18 00:40:13.094704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.290 [2024-11-18 00:40:13.094734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.290 [2024-11-18 00:40:13.094765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.290 [2024-11-18 00:40:13.110810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.290 [2024-11-18 00:40:13.110841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.290 [2024-11-18 00:40:13.110874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.126112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.126144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22924 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.549 [2024-11-18 00:40:13.126162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.138412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.138442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.549 [2024-11-18 00:40:13.138475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.152861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.152890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.549 [2024-11-18 00:40:13.152921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.166960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.166991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.549 [2024-11-18 00:40:13.167009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.178167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.178194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.549 [2024-11-18 00:40:13.178225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.191824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.191853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.549 [2024-11-18 00:40:13.191870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.205779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.205811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.549 [2024-11-18 00:40:13.205828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.215720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.215754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.549 [2024-11-18 00:40:13.215786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.231016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.231044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.549 [2024-11-18 00:40:13.231075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.247023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.247054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.549 [2024-11-18 00:40:13.247071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.261510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.261557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.549 [2024-11-18 00:40:13.261574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.273058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.273086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.549 [2024-11-18 00:40:13.273117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.288019] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.288047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.549 [2024-11-18 00:40:13.288078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.302822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.302850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.549 [2024-11-18 00:40:13.302880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.313968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.313996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.549 [2024-11-18 00:40:13.314026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.330011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.330040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.549 [2024-11-18 00:40:13.330071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.344273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.344304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.549 [2024-11-18 00:40:13.344345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.354946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.354975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.549 [2024-11-18 00:40:13.354991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.549 [2024-11-18 00:40:13.368929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.549 [2024-11-18 00:40:13.368958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.550 [2024-11-18 00:40:13.368989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.808 [2024-11-18 00:40:13.381815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.808 [2024-11-18 00:40:13.381847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.808 [2024-11-18 00:40:13.381864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.808 [2024-11-18 00:40:13.393510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.808 [2024-11-18 00:40:13.393539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.808 [2024-11-18 00:40:13.393571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.808 [2024-11-18 00:40:13.407871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.808 [2024-11-18 00:40:13.407898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.808 [2024-11-18 00:40:13.407929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.808 [2024-11-18 00:40:13.424226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.808 [2024-11-18 00:40:13.424272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.808 [2024-11-18 00:40:13.424289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.808 [2024-11-18 00:40:13.437244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.808 [2024-11-18 00:40:13.437275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.808 [2024-11-18 
00:40:13.437306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.808 [2024-11-18 00:40:13.451472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.808 [2024-11-18 00:40:13.451504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.808 [2024-11-18 00:40:13.451527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.808 [2024-11-18 00:40:13.462870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.808 [2024-11-18 00:40:13.462899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.808 [2024-11-18 00:40:13.462917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.808 [2024-11-18 00:40:13.478723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.808 [2024-11-18 00:40:13.478751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.808 [2024-11-18 00:40:13.478781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.808 [2024-11-18 00:40:13.493665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.808 [2024-11-18 00:40:13.493694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18490 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.808 [2024-11-18 00:40:13.493711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.808 [2024-11-18 00:40:13.507892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.808 [2024-11-18 00:40:13.507920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.808 [2024-11-18 00:40:13.507952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.808 [2024-11-18 00:40:13.522357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.808 [2024-11-18 00:40:13.522389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.808 [2024-11-18 00:40:13.522407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.808 [2024-11-18 00:40:13.533714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.808 [2024-11-18 00:40:13.533742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.808 [2024-11-18 00:40:13.533779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.808 [2024-11-18 00:40:13.549919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.808 [2024-11-18 00:40:13.549948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.808 [2024-11-18 00:40:13.549980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.808 [2024-11-18 00:40:13.566207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.808 [2024-11-18 00:40:13.566238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.808 [2024-11-18 00:40:13.566255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.808 [2024-11-18 00:40:13.580811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.808 [2024-11-18 00:40:13.580847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.808 [2024-11-18 00:40:13.580864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.808 [2024-11-18 00:40:13.592793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.808 [2024-11-18 00:40:13.592823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.808 [2024-11-18 00:40:13.592839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.808 [2024-11-18 00:40:13.607060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 
00:34:49.808 [2024-11-18 00:40:13.607088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.808 [2024-11-18 00:40:13.607120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.808 [2024-11-18 00:40:13.623029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:49.808 [2024-11-18 00:40:13.623060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.809 [2024-11-18 00:40:13.623077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.067 [2024-11-18 00:40:13.638899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.067 [2024-11-18 00:40:13.638929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.067 [2024-11-18 00:40:13.638946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.067 [2024-11-18 00:40:13.651509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.067 [2024-11-18 00:40:13.651556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.067 [2024-11-18 00:40:13.651573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.067 [2024-11-18 00:40:13.666363] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.067 [2024-11-18 00:40:13.666395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.067 [2024-11-18 00:40:13.666413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.067 [2024-11-18 00:40:13.680529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.067 [2024-11-18 00:40:13.680559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.067 [2024-11-18 00:40:13.680576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.067 [2024-11-18 00:40:13.691353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.067 [2024-11-18 00:40:13.691382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.067 [2024-11-18 00:40:13.691414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.067 [2024-11-18 00:40:13.707412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.067 [2024-11-18 00:40:13.707461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.067 [2024-11-18 00:40:13.707480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:50.067 [2024-11-18 00:40:13.721237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.067 [2024-11-18 00:40:13.721268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.067 [2024-11-18 00:40:13.721284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.067 [2024-11-18 00:40:13.735191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.067 [2024-11-18 00:40:13.735223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.067 [2024-11-18 00:40:13.735241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.067 [2024-11-18 00:40:13.747769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.067 [2024-11-18 00:40:13.747813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.067 [2024-11-18 00:40:13.747830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.067 [2024-11-18 00:40:13.761548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.067 [2024-11-18 00:40:13.761580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.067 [2024-11-18 00:40:13.761597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.068 [2024-11-18 00:40:13.773396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.068 [2024-11-18 00:40:13.773426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.068 [2024-11-18 00:40:13.773444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.068 [2024-11-18 00:40:13.787502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.068 [2024-11-18 00:40:13.787534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.068 [2024-11-18 00:40:13.787552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.068 [2024-11-18 00:40:13.799593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.068 [2024-11-18 00:40:13.799623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.068 [2024-11-18 00:40:13.799640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.068 [2024-11-18 00:40:13.814873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.068 [2024-11-18 00:40:13.814902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.068 [2024-11-18 
00:40:13.814940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.068 [2024-11-18 00:40:13.831099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.068 [2024-11-18 00:40:13.831128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.068 [2024-11-18 00:40:13.831158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.068 [2024-11-18 00:40:13.847846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.068 [2024-11-18 00:40:13.847893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.068 [2024-11-18 00:40:13.847911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.068 [2024-11-18 00:40:13.860847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.068 [2024-11-18 00:40:13.860878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.068 [2024-11-18 00:40:13.860896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.068 [2024-11-18 00:40:13.873837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.068 [2024-11-18 00:40:13.873880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21791 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.068 [2024-11-18 00:40:13.873897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.068 [2024-11-18 00:40:13.884825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.068 [2024-11-18 00:40:13.884856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.068 [2024-11-18 00:40:13.884888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.326 [2024-11-18 00:40:13.900721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.327 [2024-11-18 00:40:13.900752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.327 [2024-11-18 00:40:13.900783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.327 [2024-11-18 00:40:13.917412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.327 [2024-11-18 00:40:13.917442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.327 [2024-11-18 00:40:13.917459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.327 [2024-11-18 00:40:13.931734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.327 [2024-11-18 00:40:13.931779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.327 [2024-11-18 00:40:13.931796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.327 [2024-11-18 00:40:13.947402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.327 [2024-11-18 00:40:13.947433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.327 [2024-11-18 00:40:13.947451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.327 [2024-11-18 00:40:13.958607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.327 [2024-11-18 00:40:13.958637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.327 [2024-11-18 00:40:13.958655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.327 [2024-11-18 00:40:13.974387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.327 [2024-11-18 00:40:13.974417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.327 [2024-11-18 00:40:13.974434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.327 [2024-11-18 00:40:13.987459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x223c3f0) 00:34:50.327 [2024-11-18 00:40:13.987490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.327 [2024-11-18 00:40:13.987507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.327 [2024-11-18 00:40:13.998971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.327 [2024-11-18 00:40:13.999000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.327 [2024-11-18 00:40:13.999032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.327 [2024-11-18 00:40:14.014793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.327 [2024-11-18 00:40:14.014825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.327 [2024-11-18 00:40:14.014843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.327 [2024-11-18 00:40:14.029532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.327 [2024-11-18 00:40:14.029563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.327 [2024-11-18 00:40:14.029580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.327 [2024-11-18 00:40:14.045235] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.327 [2024-11-18 00:40:14.045264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.327 [2024-11-18 00:40:14.045296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.327 [2024-11-18 00:40:14.055875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.327 [2024-11-18 00:40:14.055906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.327 [2024-11-18 00:40:14.055930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.327 18138.00 IOPS, 70.85 MiB/s [2024-11-17T23:40:14.149Z] [2024-11-18 00:40:14.070994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.327 [2024-11-18 00:40:14.071023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.327 [2024-11-18 00:40:14.071055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.327 [2024-11-18 00:40:14.085959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.327 [2024-11-18 00:40:14.085989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.327 [2024-11-18 00:40:14.086022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.327 [2024-11-18 00:40:14.100456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.327 [2024-11-18 00:40:14.100487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.327 [2024-11-18 00:40:14.100504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.327 [2024-11-18 00:40:14.113790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.327 [2024-11-18 00:40:14.113821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.327 [2024-11-18 00:40:14.113838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.327 [2024-11-18 00:40:14.124321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.327 [2024-11-18 00:40:14.124350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.327 [2024-11-18 00:40:14.124366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.327 [2024-11-18 00:40:14.137691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.327 [2024-11-18 00:40:14.137721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.327 [2024-11-18 00:40:14.137755] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.586 [2024-11-18 00:40:14.154876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.586 [2024-11-18 00:40:14.154908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.586 [2024-11-18 00:40:14.154927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.586 [2024-11-18 00:40:14.168765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.586 [2024-11-18 00:40:14.168795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.586 [2024-11-18 00:40:14.168827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.586 [2024-11-18 00:40:14.182328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.586 [2024-11-18 00:40:14.182372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.586 [2024-11-18 00:40:14.182390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.586 [2024-11-18 00:40:14.194247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.586 [2024-11-18 00:40:14.194276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22494 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:50.586 [2024-11-18 00:40:14.194307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.586 [2024-11-18 00:40:14.208026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.586 [2024-11-18 00:40:14.208054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.586 [2024-11-18 00:40:14.208086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.586 [2024-11-18 00:40:14.225278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.586 [2024-11-18 00:40:14.225318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.586 [2024-11-18 00:40:14.225339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.586 [2024-11-18 00:40:14.238478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.586 [2024-11-18 00:40:14.238508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.586 [2024-11-18 00:40:14.238526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.586 [2024-11-18 00:40:14.249562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.586 [2024-11-18 00:40:14.249591] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.586 [2024-11-18 00:40:14.249623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.586 [2024-11-18 00:40:14.265709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.586 [2024-11-18 00:40:14.265738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.586 [2024-11-18 00:40:14.265769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.586 [2024-11-18 00:40:14.279054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.587 [2024-11-18 00:40:14.279085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.587 [2024-11-18 00:40:14.279102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.587 [2024-11-18 00:40:14.294454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.587 [2024-11-18 00:40:14.294485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.587 [2024-11-18 00:40:14.294503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.587 [2024-11-18 00:40:14.309682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.587 [2024-11-18 
00:40:14.309713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.587 [2024-11-18 00:40:14.309731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.587 [2024-11-18 00:40:14.324921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.587 [2024-11-18 00:40:14.324951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.587 [2024-11-18 00:40:14.324969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.587 [2024-11-18 00:40:14.336113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.587 [2024-11-18 00:40:14.336142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.587 [2024-11-18 00:40:14.336173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.587 [2024-11-18 00:40:14.349632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.587 [2024-11-18 00:40:14.349661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.587 [2024-11-18 00:40:14.349678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.587 [2024-11-18 00:40:14.364857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x223c3f0) 00:34:50.587 [2024-11-18 00:40:14.364886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.587 [2024-11-18 00:40:14.364917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.587 [2024-11-18 00:40:14.380908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.587 [2024-11-18 00:40:14.380936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.587 [2024-11-18 00:40:14.380967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.587 [2024-11-18 00:40:14.396989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.587 [2024-11-18 00:40:14.397018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.587 [2024-11-18 00:40:14.397051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-11-18 00:40:14.412765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.846 [2024-11-18 00:40:14.412794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-11-18 00:40:14.412826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-11-18 00:40:14.428667] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.846 [2024-11-18 00:40:14.428703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-11-18 00:40:14.428735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-11-18 00:40:14.444388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.846 [2024-11-18 00:40:14.444418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-11-18 00:40:14.444435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-11-18 00:40:14.457784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.846 [2024-11-18 00:40:14.457816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-11-18 00:40:14.457833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-11-18 00:40:14.474132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.846 [2024-11-18 00:40:14.474162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-11-18 00:40:14.474178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-11-18 00:40:14.485578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.846 [2024-11-18 00:40:14.485622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-11-18 00:40:14.485638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-11-18 00:40:14.501108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.846 [2024-11-18 00:40:14.501137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-11-18 00:40:14.501171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-11-18 00:40:14.516888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.846 [2024-11-18 00:40:14.516919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-11-18 00:40:14.516936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-11-18 00:40:14.530645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.846 [2024-11-18 00:40:14.530676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-11-18 00:40:14.530708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-11-18 00:40:14.542667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.846 [2024-11-18 00:40:14.542711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-11-18 00:40:14.542727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-11-18 00:40:14.558596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.846 [2024-11-18 00:40:14.558642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-11-18 00:40:14.558658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-11-18 00:40:14.574683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.846 [2024-11-18 00:40:14.574712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-11-18 00:40:14.574744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-11-18 00:40:14.590629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.846 [2024-11-18 00:40:14.590657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-11-18 
00:40:14.590687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-11-18 00:40:14.606841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.846 [2024-11-18 00:40:14.606872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-11-18 00:40:14.606889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-11-18 00:40:14.622184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.846 [2024-11-18 00:40:14.622211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-11-18 00:40:14.622244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.846 [2024-11-18 00:40:14.636455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.846 [2024-11-18 00:40:14.636485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.846 [2024-11-18 00:40:14.636502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.847 [2024-11-18 00:40:14.647086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.847 [2024-11-18 00:40:14.647130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21029 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.847 [2024-11-18 00:40:14.647146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:50.847 [2024-11-18 00:40:14.660684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:50.847 [2024-11-18 00:40:14.660713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.847 [2024-11-18 00:40:14.660745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.105 [2024-11-18 00:40:14.673219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.105 [2024-11-18 00:40:14.673265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.105 [2024-11-18 00:40:14.673291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.105 [2024-11-18 00:40:14.687633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.105 [2024-11-18 00:40:14.687663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.105 [2024-11-18 00:40:14.687679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.105 [2024-11-18 00:40:14.700823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.105 [2024-11-18 00:40:14.700856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.105 [2024-11-18 00:40:14.700890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.105 [2024-11-18 00:40:14.712835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.105 [2024-11-18 00:40:14.712867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.105 [2024-11-18 00:40:14.712885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.105 [2024-11-18 00:40:14.726235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.105 [2024-11-18 00:40:14.726266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.105 [2024-11-18 00:40:14.726283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.105 [2024-11-18 00:40:14.737573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.105 [2024-11-18 00:40:14.737602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.105 [2024-11-18 00:40:14.737618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.105 [2024-11-18 00:40:14.751960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x223c3f0) 00:34:51.105 [2024-11-18 00:40:14.751991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.105 [2024-11-18 00:40:14.752009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.105 [2024-11-18 00:40:14.764266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.105 [2024-11-18 00:40:14.764294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.105 [2024-11-18 00:40:14.764334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.105 [2024-11-18 00:40:14.776719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.105 [2024-11-18 00:40:14.776747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.105 [2024-11-18 00:40:14.776779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.105 [2024-11-18 00:40:14.792760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.105 [2024-11-18 00:40:14.792801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.105 [2024-11-18 00:40:14.792819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.105 [2024-11-18 00:40:14.804512] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.105 [2024-11-18 00:40:14.804541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.105 [2024-11-18 00:40:14.804558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.105 [2024-11-18 00:40:14.818223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.105 [2024-11-18 00:40:14.818253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.105 [2024-11-18 00:40:14.818270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.105 [2024-11-18 00:40:14.830375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.105 [2024-11-18 00:40:14.830411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.105 [2024-11-18 00:40:14.830429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.105 [2024-11-18 00:40:14.845142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.105 [2024-11-18 00:40:14.845173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.105 [2024-11-18 00:40:14.845191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:51.105 [2024-11-18 00:40:14.857361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.105 [2024-11-18 00:40:14.857392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.105 [2024-11-18 00:40:14.857409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.105 [2024-11-18 00:40:14.872908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.106 [2024-11-18 00:40:14.872938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-11-18 00:40:14.872970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.106 [2024-11-18 00:40:14.886228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.106 [2024-11-18 00:40:14.886258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-11-18 00:40:14.886276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.106 [2024-11-18 00:40:14.897821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.106 [2024-11-18 00:40:14.897849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-11-18 00:40:14.897882] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.106 [2024-11-18 00:40:14.912704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.106 [2024-11-18 00:40:14.912748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.106 [2024-11-18 00:40:14.912765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.364 [2024-11-18 00:40:14.928084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.364 [2024-11-18 00:40:14.928116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.364 [2024-11-18 00:40:14.928135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.364 [2024-11-18 00:40:14.939371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.364 [2024-11-18 00:40:14.939402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.364 [2024-11-18 00:40:14.939420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.364 [2024-11-18 00:40:14.953531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.364 [2024-11-18 00:40:14.953561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.364 [2024-11-18 
00:40:14.953578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.364 [2024-11-18 00:40:14.967441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.364 [2024-11-18 00:40:14.967472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.364 [2024-11-18 00:40:14.967489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.364 [2024-11-18 00:40:14.980079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.364 [2024-11-18 00:40:14.980123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.364 [2024-11-18 00:40:14.980139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.364 [2024-11-18 00:40:14.995408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.364 [2024-11-18 00:40:14.995437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.364 [2024-11-18 00:40:14.995454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.365 [2024-11-18 00:40:15.010666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.365 [2024-11-18 00:40:15.010697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:286 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.365 [2024-11-18 00:40:15.010714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.365 [2024-11-18 00:40:15.026056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.365 [2024-11-18 00:40:15.026086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.365 [2024-11-18 00:40:15.026109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.365 [2024-11-18 00:40:15.040630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.365 [2024-11-18 00:40:15.040661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.365 [2024-11-18 00:40:15.040678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.365 [2024-11-18 00:40:15.052428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.365 [2024-11-18 00:40:15.052457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.365 [2024-11-18 00:40:15.052474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.365 18159.50 IOPS, 70.94 MiB/s [2024-11-17T23:40:15.187Z] [2024-11-18 00:40:15.067927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x223c3f0) 00:34:51.365 [2024-11-18 
00:40:15.067957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.365 [2024-11-18 00:40:15.067973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.365 00:34:51.365 Latency(us) 00:34:51.365 [2024-11-17T23:40:15.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:51.365 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:51.365 nvme0n1 : 2.05 17802.20 69.54 0.00 0.00 7037.34 3543.80 48156.82 00:34:51.365 [2024-11-17T23:40:15.187Z] =================================================================================================================== 00:34:51.365 [2024-11-17T23:40:15.187Z] Total : 17802.20 69.54 0.00 0.00 7037.34 3543.80 48156.82 00:34:51.365 { 00:34:51.365 "results": [ 00:34:51.365 { 00:34:51.365 "job": "nvme0n1", 00:34:51.365 "core_mask": "0x2", 00:34:51.365 "workload": "randread", 00:34:51.365 "status": "finished", 00:34:51.365 "queue_depth": 128, 00:34:51.365 "io_size": 4096, 00:34:51.365 "runtime": 2.05087, 00:34:51.365 "iops": 17802.201017129315, 00:34:51.365 "mibps": 69.53984772316139, 00:34:51.365 "io_failed": 0, 00:34:51.365 "io_timeout": 0, 00:34:51.365 "avg_latency_us": 7037.337944875579, 00:34:51.365 "min_latency_us": 3543.7985185185184, 00:34:51.365 "max_latency_us": 48156.8237037037 00:34:51.365 } 00:34:51.365 ], 00:34:51.365 "core_count": 1 00:34:51.365 } 00:34:51.365 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:51.365 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:51.365 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:51.365 | .driver_specific 00:34:51.365 | .nvme_error 
00:34:51.365 | .status_code 00:34:51.365 | .command_transient_transport_error' 00:34:51.365 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:51.623 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:34:51.623 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 396389 00:34:51.623 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 396389 ']' 00:34:51.623 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 396389 00:34:51.623 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:34:51.623 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:51.623 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396389 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396389' 00:34:51.881 killing process with pid 396389 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 396389 00:34:51.881 Received shutdown signal, test time was about 2.000000 seconds 00:34:51.881 00:34:51.881 Latency(us) 00:34:51.881 [2024-11-17T23:40:15.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:51.881 
[2024-11-17T23:40:15.703Z] =================================================================================================================== 00:34:51.881 [2024-11-17T23:40:15.703Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 396389 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=396791 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 396791 /var/tmp/bperf.sock 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 396791 ']' 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:34:51.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:51.881 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:51.881 [2024-11-18 00:40:15.691698] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:34:51.881 [2024-11-18 00:40:15.691778] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396791 ] 00:34:51.881 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:51.881 Zero copy mechanism will not be used. 00:34:52.139 [2024-11-18 00:40:15.757070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:52.139 [2024-11-18 00:40:15.799555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:52.139 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:52.139 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:52.139 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:52.139 00:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:52.396 00:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:52.396 00:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.396 00:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:52.397 00:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.397 00:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:52.397 00:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:52.961 nvme0n1 00:34:52.961 00:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:52.961 00:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.961 00:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:52.961 00:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.961 00:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:52.961 00:40:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:52.961 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:52.961 Zero copy mechanism will not be used. 00:34:52.961 Running I/O for 2 seconds... 
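For reference, the setup driven above over `/var/tmp/bperf.sock` can be sketched as the following RPC sequence. The RPC calls are taken verbatim from this log; the sample iostat JSON at the end is hypothetical (shaped like `bdev_get_iostat` output with `--nvme-error-stat` enabled) and only illustrates how the transient-error counter is extracted with `jq`:

```shell
#!/usr/bin/env bash
# Sketch of the digest-error test flow seen in this log. The commented RPCs
# assume a bdevperf instance listening on /var/tmp/bperf.sock; adjust paths
# and addresses for your environment.
rpc="scripts/rpc.py -s /var/tmp/bperf.sock"

# 1. Enable per-controller NVMe error statistics and unlimited retries.
# $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 2. Attach the controller with data digest verification (--ddgst) enabled.
# $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
#     -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 3. Inject corruption into crc32c operations (-t corrupt -i 32, as invoked
#    above), so receive-side data digest checks fail and reads complete with
#    COMMAND TRANSIENT TRANSPORT ERROR (00/22).
# $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

# 4. After the run, read the transient-error count from the iostat JSON.
#    Hypothetical sample of the relevant structure:
sample='{"bdevs":[{"name":"nvme0n1","driver_specific":{"nvme_error":
{"status_code":{"command_transient_transport_error":143}}}}]}'
echo "$sample" | jq -r '.bdevs[0] | .driver_specific | .nvme_error
    | .status_code | .command_transient_transport_error'
```

The test then asserts that this counter is greater than zero, confirming the injected digest corruption surfaced as transient transport errors rather than data corruption.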
00:34:52.961 [2024-11-18 00:40:16.693931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:52.961 [2024-11-18 00:40:16.693998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.961 [2024-11-18 00:40:16.694020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.961 [2024-11-18 00:40:16.700008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:52.961 [2024-11-18 00:40:16.700045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.961 [2024-11-18 00:40:16.700065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.961 [2024-11-18 00:40:16.706902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:52.961 [2024-11-18 00:40:16.706934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.961 [2024-11-18 00:40:16.706968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.961 [2024-11-18 00:40:16.713634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:52.961 [2024-11-18 00:40:16.713668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.961 [2024-11-18 00:40:16.713701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.961 [2024-11-18 00:40:16.719588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:52.961 [2024-11-18 00:40:16.719635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.961 [2024-11-18 00:40:16.719653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.961 [2024-11-18 00:40:16.725468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:52.961 [2024-11-18 00:40:16.725500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.961 [2024-11-18 00:40:16.725519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.961 [2024-11-18 00:40:16.731263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:52.961 [2024-11-18 00:40:16.731295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.961 [2024-11-18 00:40:16.731338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.961 [2024-11-18 00:40:16.737122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:52.961 [2024-11-18 00:40:16.737168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.961 [2024-11-18 00:40:16.737187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.961 [2024-11-18 00:40:16.743000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:52.961 [2024-11-18 00:40:16.743046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.961 [2024-11-18 00:40:16.743065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.961 [2024-11-18 00:40:16.749091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:52.961 [2024-11-18 00:40:16.749137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.961 [2024-11-18 00:40:16.749157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.961 [2024-11-18 00:40:16.752784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:52.961 [2024-11-18 00:40:16.752816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.961 [2024-11-18 00:40:16.752850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.961 [2024-11-18 00:40:16.758626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:52.962 [2024-11-18 00:40:16.758659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:52.962 [2024-11-18 00:40:16.758677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.962 [2024-11-18 00:40:16.763933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:52.962 [2024-11-18 00:40:16.763965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.962 [2024-11-18 00:40:16.763989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.962 [2024-11-18 00:40:16.770046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:52.962 [2024-11-18 00:40:16.770091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.962 [2024-11-18 00:40:16.770109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.962 [2024-11-18 00:40:16.777397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:52.962 [2024-11-18 00:40:16.777430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.962 [2024-11-18 00:40:16.777448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.962 [2024-11-18 00:40:16.783151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:52.962 [2024-11-18 00:40:16.783185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.962 [2024-11-18 00:40:16.783203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.788926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.221 [2024-11-18 00:40:16.788973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.221 [2024-11-18 00:40:16.788990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.794443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.221 [2024-11-18 00:40:16.794476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.221 [2024-11-18 00:40:16.794495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.799968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.221 [2024-11-18 00:40:16.799999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.221 [2024-11-18 00:40:16.800032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.805344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.221 [2024-11-18 00:40:16.805377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.221 [2024-11-18 00:40:16.805396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.810744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.221 [2024-11-18 00:40:16.810776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.221 [2024-11-18 00:40:16.810810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.816540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.221 [2024-11-18 00:40:16.816578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.221 [2024-11-18 00:40:16.816598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.822203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.221 [2024-11-18 00:40:16.822250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.221 [2024-11-18 00:40:16.822268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.827822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 
00:34:53.221 [2024-11-18 00:40:16.827855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.221 [2024-11-18 00:40:16.827873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.833549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.221 [2024-11-18 00:40:16.833582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.221 [2024-11-18 00:40:16.833600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.839371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.221 [2024-11-18 00:40:16.839404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.221 [2024-11-18 00:40:16.839422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.845106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.221 [2024-11-18 00:40:16.845139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.221 [2024-11-18 00:40:16.845157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.850742] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.221 [2024-11-18 00:40:16.850773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.221 [2024-11-18 00:40:16.850806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.856536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.221 [2024-11-18 00:40:16.856569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.221 [2024-11-18 00:40:16.856587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.862316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.221 [2024-11-18 00:40:16.862348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.221 [2024-11-18 00:40:16.862366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.867967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.221 [2024-11-18 00:40:16.868001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.221 [2024-11-18 00:40:16.868019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.873805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.221 [2024-11-18 00:40:16.873837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.221 [2024-11-18 00:40:16.873870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.879354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.221 [2024-11-18 00:40:16.879387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.221 [2024-11-18 00:40:16.879405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.884840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.221 [2024-11-18 00:40:16.884873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.221 [2024-11-18 00:40:16.884891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.221 [2024-11-18 00:40:16.890301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.890343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.890362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:16.895829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.895878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.895896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:16.901423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.901455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.901473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:16.907005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.907036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.907070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:16.912475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.912507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.912532] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:16.917965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.917998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.918016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:16.923531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.923564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.923582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:16.929002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.929034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.929066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:16.934414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.934447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.934466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:16.939833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.939864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.939897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:16.945329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.945362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.945379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:16.951861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.951893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.951926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:16.959751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.959785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.959803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:16.967392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.967426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.967444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:16.974140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.974184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.974217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:16.979989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.980025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.980059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:16.985825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.985872] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.985891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:16.991643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.991675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.991694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:16.998824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:16.998858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:16.998876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:17.005875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:17.005909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:17.005928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:17.013820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:17.013851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:17.013884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.222 [2024-11-18 00:40:17.021642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.222 [2024-11-18 00:40:17.021686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.222 [2024-11-18 00:40:17.021726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.223 [2024-11-18 00:40:17.029158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.223 [2024-11-18 00:40:17.029192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.223 [2024-11-18 00:40:17.029211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.223 [2024-11-18 00:40:17.035892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.223 [2024-11-18 00:40:17.035932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.223 [2024-11-18 00:40:17.035950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.480 [2024-11-18 00:40:17.043600] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.480 [2024-11-18 00:40:17.043644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-18 00:40:17.043663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.480 [2024-11-18 00:40:17.051748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.480 [2024-11-18 00:40:17.051781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-18 00:40:17.051799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.480 [2024-11-18 00:40:17.059199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.480 [2024-11-18 00:40:17.059232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-18 00:40:17.059251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.480 [2024-11-18 00:40:17.066554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.480 [2024-11-18 00:40:17.066587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-18 00:40:17.066605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:34:53.480 [2024-11-18 00:40:17.074188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.480 [2024-11-18 00:40:17.074220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-18 00:40:17.074239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.480 [2024-11-18 00:40:17.082246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.480 [2024-11-18 00:40:17.082292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-18 00:40:17.082335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.480 [2024-11-18 00:40:17.090007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.480 [2024-11-18 00:40:17.090060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.480 [2024-11-18 00:40:17.090079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.097013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.097046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.097064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.103148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.103197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.103215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.109062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.109094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.109112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.115061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.115092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.115125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.120835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.120867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.120885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.126524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.126557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.126574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.129995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.130023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.130039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.135555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.135586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.135621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.141074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.141121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:53.481 [2024-11-18 00:40:17.141139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.146790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.146833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.146849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.152487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.152519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.152554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.158100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.158146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.158163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.163536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.163567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.163600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.169022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.169066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.169082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.174863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.174893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.174926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.181965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.182010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.182026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.189000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.189030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.189069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.195036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.195082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.195099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.201456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.201487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.201524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.208792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.208839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.208857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.216409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 
00:34:53.481 [2024-11-18 00:40:17.216440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.216473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.223482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.223514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.223531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.229934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.229963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.229993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.235589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.235635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.235651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.241186] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.241216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.241249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.247105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.481 [2024-11-18 00:40:17.247141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.481 [2024-11-18 00:40:17.247160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.481 [2024-11-18 00:40:17.254378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.482 [2024-11-18 00:40:17.254410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.482 [2024-11-18 00:40:17.254427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.482 [2024-11-18 00:40:17.261189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:53.482 [2024-11-18 00:40:17.261237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.482 [2024-11-18 00:40:17.261254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0
00:34:53.482 [2024-11-18 00:40:17.267469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.482 [2024-11-18 00:40:17.267516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.482 [2024-11-18 00:40:17.267535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:53.482 [2024-11-18 00:40:17.274380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.482 [2024-11-18 00:40:17.274411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.482 [2024-11-18 00:40:17.274443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:53.482 [2024-11-18 00:40:17.282111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.482 [2024-11-18 00:40:17.282141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.482 [2024-11-18 00:40:17.282173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:53.482 [2024-11-18 00:40:17.289592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.482 [2024-11-18 00:40:17.289639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.482 [2024-11-18 00:40:17.289656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:53.482 [2024-11-18 00:40:17.297445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.482 [2024-11-18 00:40:17.297491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.482 [2024-11-18 00:40:17.297510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:53.747 [2024-11-18 00:40:17.305498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.747 [2024-11-18 00:40:17.305532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.747 [2024-11-18 00:40:17.305555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:53.747 [2024-11-18 00:40:17.313491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.747 [2024-11-18 00:40:17.313523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.747 [2024-11-18 00:40:17.313557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:53.747 [2024-11-18 00:40:17.321643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.747 [2024-11-18 00:40:17.321675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.747 [2024-11-18 00:40:17.321708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:53.747 [2024-11-18 00:40:17.329854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.747 [2024-11-18 00:40:17.329887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.747 [2024-11-18 00:40:17.329905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:53.747 [2024-11-18 00:40:17.337612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.747 [2024-11-18 00:40:17.337643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.747 [2024-11-18 00:40:17.337678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:53.747 [2024-11-18 00:40:17.345899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.747 [2024-11-18 00:40:17.345931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.747 [2024-11-18 00:40:17.345965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:53.747 [2024-11-18 00:40:17.353165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.747 [2024-11-18 00:40:17.353197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.747 [2024-11-18 00:40:17.353236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:53.747 [2024-11-18 00:40:17.359818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.747 [2024-11-18 00:40:17.359850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.747 [2024-11-18 00:40:17.359868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:53.747 [2024-11-18 00:40:17.367190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.747 [2024-11-18 00:40:17.367237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.747 [2024-11-18 00:40:17.367256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:53.747 [2024-11-18 00:40:17.374828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.747 [2024-11-18 00:40:17.374881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.747 [2024-11-18 00:40:17.374900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:53.747 [2024-11-18 00:40:17.382369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.747 [2024-11-18 00:40:17.382402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.747 [2024-11-18 00:40:17.382420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:53.747 [2024-11-18 00:40:17.388189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.747 [2024-11-18 00:40:17.388221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.747 [2024-11-18 00:40:17.388238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:53.747 [2024-11-18 00:40:17.393855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.747 [2024-11-18 00:40:17.393903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.747 [2024-11-18 00:40:17.393922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:53.747 [2024-11-18 00:40:17.400063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.747 [2024-11-18 00:40:17.400094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.747 [2024-11-18 00:40:17.400127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:53.747 [2024-11-18 00:40:17.403966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.747 [2024-11-18 00:40:17.403994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.747 [2024-11-18 00:40:17.404025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:53.747 [2024-11-18 00:40:17.411294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.747 [2024-11-18 00:40:17.411345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.747 [2024-11-18 00:40:17.411362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:53.747 [2024-11-18 00:40:17.417199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.417228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.417260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.423133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.423163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.423196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.429078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.429108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.429141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.434884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.434913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.434931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.440886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.440930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.440947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.446703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.446733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.446750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.452642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.452675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.452693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.459132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.459163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.459196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.465685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.465729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.465746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.472095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.472128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.472146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.479277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.479334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.479359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.484651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.484710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.484743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.490677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.490708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.490761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.497907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.497953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.497972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.505711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.505742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.505775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.513526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.513558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.513593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.519728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.519761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.519780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.523628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.523673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.523691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.528519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.528550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.528582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.534795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.534846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.534863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.541471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.541502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.541521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.547345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.547375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.547408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.553248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.553277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.553309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.559071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.559101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.559134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:53.748 [2024-11-18 00:40:17.564807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:53.748 [2024-11-18 00:40:17.564852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.748 [2024-11-18 00:40:17.564869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:54.007 [2024-11-18 00:40:17.570576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.007 [2024-11-18 00:40:17.570609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.007 [2024-11-18 00:40:17.570641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:54.007 [2024-11-18 00:40:17.576341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.007 [2024-11-18 00:40:17.576371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.007 [2024-11-18 00:40:17.576404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:54.007 [2024-11-18 00:40:17.582140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.007 [2024-11-18 00:40:17.582185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.007 [2024-11-18 00:40:17.582201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:54.007 [2024-11-18 00:40:17.587899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.007 [2024-11-18 00:40:17.587928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.007 [2024-11-18 00:40:17.587960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:54.007 [2024-11-18 00:40:17.593929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.007 [2024-11-18 00:40:17.593960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.007 [2024-11-18 00:40:17.593993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:54.007 [2024-11-18 00:40:17.598758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.007 [2024-11-18 00:40:17.598787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.007 [2024-11-18 00:40:17.598819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:54.007 [2024-11-18 00:40:17.604837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.007 [2024-11-18 00:40:17.604942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.007 [2024-11-18 00:40:17.604978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:54.007 [2024-11-18 00:40:17.610016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.610061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.610078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.616147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.616177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.616210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.620893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.620923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.620957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.626212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.626255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.626273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.631595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.631624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.631647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.636976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.637005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.637036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.642390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.642420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.642452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.647776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.647805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.647837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.653218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.653247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.653278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.658607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.658637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.658669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.664019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.664049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.664080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.669501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.669531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.669565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.675224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.675254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.675287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.682106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.682135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.682167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.689937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.689969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.689988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:54.008 4964.00 IOPS, 620.50 MiB/s [2024-11-17T23:40:17.830Z] [2024-11-18 00:40:17.699100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.699144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.699161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.707151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.707197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.707214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.714897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.714946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.714964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.722175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.722207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.722240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.729955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.729986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.730019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.737763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.737795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.737829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:54.008 [2024-11-18 00:40:17.745517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.008 [2024-11-18 00:40:17.745550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.008 [2024-11-18 00:40:17.745574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:54.009 [2024-11-18 00:40:17.753118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.009 [2024-11-18 00:40:17.753150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.009 [2024-11-18 00:40:17.753168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:54.009 [2024-11-18 00:40:17.760781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.009 [2024-11-18 00:40:17.760813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.009 [2024-11-18 00:40:17.760847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:54.009 [2024-11-18 00:40:17.768507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.009 [2024-11-18 00:40:17.768539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.009 [2024-11-18 00:40:17.768558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:54.009 [2024-11-18 00:40:17.776259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.009 [2024-11-18 00:40:17.776304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.009 [2024-11-18 00:40:17.776329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:54.009 [2024-11-18 00:40:17.784472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.009 [2024-11-18 00:40:17.784505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.009 [2024-11-18 00:40:17.784523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:54.009 [2024-11-18 00:40:17.792428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.009 [2024-11-18 00:40:17.792474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.009 [2024-11-18 00:40:17.792493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:54.009 [2024-11-18 00:40:17.798710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x107f930) 00:34:54.009 [2024-11-18 00:40:17.798743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.009 [2024-11-18 00:40:17.798761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:54.009 [2024-11-18 00:40:17.804888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.009 [2024-11-18 00:40:17.804920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.009 [2024-11-18 00:40:17.804938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:54.009 [2024-11-18 00:40:17.810817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.009 [2024-11-18 00:40:17.810888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.009 [2024-11-18 00:40:17.810923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:54.009 [2024-11-18 00:40:17.814928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.009 [2024-11-18 00:40:17.814958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.009 [2024-11-18 00:40:17.814992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:54.009 [2024-11-18 00:40:17.821109] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.009 [2024-11-18 00:40:17.821138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.009 [2024-11-18 00:40:17.821170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:54.009 [2024-11-18 00:40:17.827457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.009 [2024-11-18 00:40:17.827490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.009 [2024-11-18 00:40:17.827508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:54.268 [2024-11-18 00:40:17.833024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.268 [2024-11-18 00:40:17.833054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.268 [2024-11-18 00:40:17.833086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:54.268 [2024-11-18 00:40:17.839115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.268 [2024-11-18 00:40:17.839144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.268 [2024-11-18 00:40:17.839177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:34:54.268 [2024-11-18 00:40:17.847253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.268 [2024-11-18 00:40:17.847322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.268 [2024-11-18 00:40:17.847346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:54.268 [2024-11-18 00:40:17.853559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.268 [2024-11-18 00:40:17.853606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.268 [2024-11-18 00:40:17.853623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:54.268 [2024-11-18 00:40:17.862168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.268 [2024-11-18 00:40:17.862231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.268 [2024-11-18 00:40:17.862253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:54.268 [2024-11-18 00:40:17.869548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.268 [2024-11-18 00:40:17.869579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.268 [2024-11-18 00:40:17.869611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:54.268 [2024-11-18 00:40:17.877447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.877494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.877513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.882757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.882788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.882821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.888725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.888771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.888789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.894637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.894669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.894686] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.900292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.900360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.900380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.906463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.906497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.906515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.914217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.914247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.914281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.920549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.920580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.920623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.926911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.926940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.926972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.933407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.933440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.933459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.940433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.940463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.940495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.946321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.946351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.946383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.952133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.952180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.952198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.957884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.957929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.957946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.963840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.963869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.963886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.969510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.969541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.969577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.975252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.975282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.975323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.981168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.981198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.981230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.987694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:17.987724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.987755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:17.994100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 
00:34:54.269 [2024-11-18 00:40:17.994146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:17.994166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:18.000421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:18.000450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:18.000482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:18.007623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:18.007653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:18.007670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:18.014971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.269 [2024-11-18 00:40:18.015017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.269 [2024-11-18 00:40:18.015035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:54.269 [2024-11-18 00:40:18.022150] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.270 [2024-11-18 00:40:18.022178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.270 [2024-11-18 00:40:18.022211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:54.270 [2024-11-18 00:40:18.027898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.270 [2024-11-18 00:40:18.027946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.270 [2024-11-18 00:40:18.027970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:54.270 [2024-11-18 00:40:18.033412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.270 [2024-11-18 00:40:18.033444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.270 [2024-11-18 00:40:18.033461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:54.270 [2024-11-18 00:40:18.038791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.270 [2024-11-18 00:40:18.038838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.270 [2024-11-18 00:40:18.038857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:34:54.270 [2024-11-18 00:40:18.044286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.270 [2024-11-18 00:40:18.044322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.270 [2024-11-18 00:40:18.044358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:54.270 [2024-11-18 00:40:18.050191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.270 [2024-11-18 00:40:18.050220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.270 [2024-11-18 00:40:18.050253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:54.270 [2024-11-18 00:40:18.057253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.270 [2024-11-18 00:40:18.057282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.270 [2024-11-18 00:40:18.057299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:54.270 [2024-11-18 00:40:18.063638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.270 [2024-11-18 00:40:18.063666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.270 [2024-11-18 00:40:18.063699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:54.270 [2024-11-18 00:40:18.069203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.270 [2024-11-18 00:40:18.069247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.270 [2024-11-18 00:40:18.069264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:54.270 [2024-11-18 00:40:18.075136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.270 [2024-11-18 00:40:18.075167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.270 [2024-11-18 00:40:18.075199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:54.270 [2024-11-18 00:40:18.080753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.270 [2024-11-18 00:40:18.080808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.270 [2024-11-18 00:40:18.080826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:54.270 [2024-11-18 00:40:18.086666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.270 [2024-11-18 00:40:18.086698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.270 [2024-11-18 00:40:18.086729] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:54.530 [2024-11-18 00:40:18.092323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.530 [2024-11-18 00:40:18.092371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.530 [2024-11-18 00:40:18.092390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:54.530 [2024-11-18 00:40:18.098132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.530 [2024-11-18 00:40:18.098178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.530 [2024-11-18 00:40:18.098195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:54.530 [2024-11-18 00:40:18.104071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.530 [2024-11-18 00:40:18.104117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:54.530 [2024-11-18 00:40:18.104134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:54.530 [2024-11-18 00:40:18.109961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:54.530 [2024-11-18 00:40:18.110008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:54.530 [2024-11-18 00:40:18.110026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:54.530 [2024-11-18 00:40:18.115774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:54.530 [2024-11-18 00:40:18.115820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:54.530 [2024-11-18 00:40:18.115838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the cycle above repeats: nvme_tcp.c:1365 data digest error on tqpair=(0x107f930), followed by the READ command print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, qid:1, with varying cid, lba, and sqhd values, from 00:40:18.121713 through 00:40:18.609579 ...]
00:34:55.051 [2024-11-18 00:40:18.616362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930)
00:34:55.051 [2024-11-18 00:40:18.616395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:55.051 [2024-11-18 00:40:18.616419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0
sqhd:0022 p:0 m:0 dnr:0 00:34:55.051 [2024-11-18 00:40:18.622618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:55.052 [2024-11-18 00:40:18.622663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.052 [2024-11-18 00:40:18.622680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:55.052 [2024-11-18 00:40:18.627081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:55.052 [2024-11-18 00:40:18.627129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.052 [2024-11-18 00:40:18.627146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:55.052 [2024-11-18 00:40:18.632768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:55.052 [2024-11-18 00:40:18.632800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.052 [2024-11-18 00:40:18.632819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:55.052 [2024-11-18 00:40:18.640027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:55.052 [2024-11-18 00:40:18.640072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.052 [2024-11-18 00:40:18.640090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:55.052 [2024-11-18 00:40:18.646065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:55.052 [2024-11-18 00:40:18.646094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.052 [2024-11-18 00:40:18.646125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:55.052 [2024-11-18 00:40:18.652026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:55.052 [2024-11-18 00:40:18.652057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.052 [2024-11-18 00:40:18.652074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:55.052 [2024-11-18 00:40:18.657833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:55.052 [2024-11-18 00:40:18.657881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.052 [2024-11-18 00:40:18.657899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:55.052 [2024-11-18 00:40:18.664819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:55.052 [2024-11-18 00:40:18.664849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.052 [2024-11-18 00:40:18.664882] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:55.052 [2024-11-18 00:40:18.672391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:55.052 [2024-11-18 00:40:18.672443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.052 [2024-11-18 00:40:18.672462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:55.052 [2024-11-18 00:40:18.678433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:55.052 [2024-11-18 00:40:18.678465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.052 [2024-11-18 00:40:18.678483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:55.052 [2024-11-18 00:40:18.684488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:55.052 [2024-11-18 00:40:18.684535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:55.052 [2024-11-18 00:40:18.684552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:55.052 [2024-11-18 00:40:18.691968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x107f930) 00:34:55.052 [2024-11-18 00:40:18.691999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:55.052 [2024-11-18 00:40:18.692034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:55.052 4920.00 IOPS, 615.00 MiB/s 00:34:55.052 Latency(us) 00:34:55.052 [2024-11-17T23:40:18.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.052 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:55.052 nvme0n1 : 2.00 4922.05 615.26 0.00 0.00 3246.49 910.22 9223.59 00:34:55.052 [2024-11-17T23:40:18.874Z] =================================================================================================================== 00:34:55.052 [2024-11-17T23:40:18.874Z] Total : 4922.05 615.26 0.00 0.00 3246.49 910.22 9223.59 00:34:55.052 { 00:34:55.052 "results": [ 00:34:55.052 { 00:34:55.052 "job": "nvme0n1", 00:34:55.052 "core_mask": "0x2", 00:34:55.052 "workload": "randread", 00:34:55.052 "status": "finished", 00:34:55.052 "queue_depth": 16, 00:34:55.052 "io_size": 131072, 00:34:55.052 "runtime": 2.002418, 00:34:55.052 "iops": 4922.049242465859, 00:34:55.052 "mibps": 615.2561553082323, 00:34:55.052 "io_failed": 0, 00:34:55.052 "io_timeout": 0, 00:34:55.052 "avg_latency_us": 3246.4860413660413, 00:34:55.052 "min_latency_us": 910.2222222222222, 00:34:55.052 "max_latency_us": 9223.585185185186 00:34:55.052 } 00:34:55.052 ], 00:34:55.052 "core_count": 1 00:34:55.052 } 00:34:55.052 00:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:55.052 00:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:55.052 00:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:55.052 00:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:55.052 | .driver_specific 00:34:55.052 | .nvme_error 00:34:55.052 | .status_code 00:34:55.052 | .command_transient_transport_error' 00:34:55.310 00:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 318 > 0 )) 00:34:55.310 00:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 396791 00:34:55.310 00:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 396791 ']' 00:34:55.310 00:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 396791 00:34:55.310 00:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:34:55.310 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:55.310 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396791 00:34:55.310 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:55.310 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:55.311 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396791' 00:34:55.311 killing process with pid 396791 00:34:55.311 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 396791 00:34:55.311 Received shutdown signal, test time was about 2.000000 seconds 00:34:55.311 00:34:55.311 Latency(us) 00:34:55.311 [2024-11-17T23:40:19.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.311 [2024-11-17T23:40:19.133Z] 
=================================================================================================================== 00:34:55.311 [2024-11-17T23:40:19.133Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:55.311 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 396791 00:34:55.568 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:34:55.568 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:55.568 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:55.568 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:55.568 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:55.568 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=397200 00:34:55.568 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:34:55.568 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 397200 /var/tmp/bperf.sock 00:34:55.568 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 397200 ']' 00:34:55.568 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:55.568 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:55.568 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:34:55.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:55.568 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:55.568 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:55.568 [2024-11-18 00:40:19.252227] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:34:55.568 [2024-11-18 00:40:19.252349] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397200 ]
00:34:55.568 [2024-11-18 00:40:19.323292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:55.568 [2024-11-18 00:40:19.371156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:55.825 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:55.825 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:34:55.825 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:55.825 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:56.082 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:56.082 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.082 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:56.082 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.082 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:56.082 00:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:56.339 nvme0n1
00:34:56.339 00:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:34:56.339 00:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:56.339 00:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:56.339 00:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:56.339 00:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:56.339 00:40:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:56.597 Running I/O for 2 seconds...
00:34:56.597 [2024-11-18 00:40:20.281298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166df550
00:34:56.597 [2024-11-18 00:40:20.282583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.597 [2024-11-18 00:40:20.282637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:34:56.597 [2024-11-18 00:40:20.294235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166f7970
00:34:56.597 [2024-11-18 00:40:20.295655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.597 [2024-11-18 00:40:20.295698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:34:56.597 [2024-11-18 00:40:20.306677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166f7da8
00:34:56.597 [2024-11-18 00:40:20.308252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.597 [2024-11-18 00:40:20.308296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:34:56.597 [2024-11-18 00:40:20.318028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166e3498
00:34:56.597 [2024-11-18 00:40:20.319330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.597 [2024-11-18 00:40:20.319359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:34:56.597 [2024-11-18 00:40:20.330098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166f8e88
00:34:56.597 [2024-11-18 00:40:20.331413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.597 [2024-11-18 00:40:20.331442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:34:56.597 [2024-11-18 00:40:20.344885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166e38d0
00:34:56.598 [2024-11-18 00:40:20.346699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.598 [2024-11-18 00:40:20.346740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:34:56.598 [2024-11-18 00:40:20.353454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fbcf0
00:34:56.598 [2024-11-18 00:40:20.354333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.598 [2024-11-18 00:40:20.354377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:34:56.598 [2024-11-18 00:40:20.365523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fb8b8
00:34:56.598 [2024-11-18 00:40:20.366462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.598 [2024-11-18 00:40:20.366491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:34:56.598 [2024-11-18 00:40:20.379907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.598 [2024-11-18 00:40:20.380211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.598 [2024-11-18 00:40:20.380254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.598 [2024-11-18 00:40:20.393607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.598 [2024-11-18 00:40:20.393833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.598 [2024-11-18 00:40:20.393859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.598 [2024-11-18 00:40:20.407277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.598 [2024-11-18 00:40:20.407606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.598 [2024-11-18 00:40:20.407637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.856 [2024-11-18 00:40:20.420805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.856 [2024-11-18 00:40:20.421100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.856 [2024-11-18 00:40:20.421144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.856 [2024-11-18 00:40:20.434492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.856 [2024-11-18 00:40:20.434725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.856 [2024-11-18 00:40:20.434779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.856 [2024-11-18 00:40:20.448204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.856 [2024-11-18 00:40:20.448493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.856 [2024-11-18 00:40:20.448522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.856 [2024-11-18 00:40:20.461933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.856 [2024-11-18 00:40:20.462167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.856 [2024-11-18 00:40:20.462193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.856 [2024-11-18 00:40:20.475616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.856 [2024-11-18 00:40:20.475841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.856 [2024-11-18 00:40:20.475867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.856 [2024-11-18 00:40:20.489399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.856 [2024-11-18 00:40:20.489645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.856 [2024-11-18 00:40:20.489689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.856 [2024-11-18 00:40:20.503133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.856 [2024-11-18 00:40:20.503402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.856 [2024-11-18 00:40:20.503444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.856 [2024-11-18 00:40:20.516771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.856 [2024-11-18 00:40:20.516972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.856 [2024-11-18 00:40:20.516998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.856 [2024-11-18 00:40:20.530457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.856 [2024-11-18 00:40:20.530629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.856 [2024-11-18 00:40:20.530657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.856 [2024-11-18 00:40:20.544057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.856 [2024-11-18 00:40:20.544299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.856 [2024-11-18 00:40:20.544336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.856 [2024-11-18 00:40:20.557960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.856 [2024-11-18 00:40:20.558228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.856 [2024-11-18 00:40:20.558275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.857 [2024-11-18 00:40:20.571967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.857 [2024-11-18 00:40:20.572197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.857 [2024-11-18 00:40:20.572239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.857 [2024-11-18 00:40:20.585626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.857 [2024-11-18 00:40:20.585860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.857 [2024-11-18 00:40:20.585902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.857 [2024-11-18 00:40:20.599133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.857 [2024-11-18 00:40:20.599356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.857 [2024-11-18 00:40:20.599382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.857 [2024-11-18 00:40:20.612679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.857 [2024-11-18 00:40:20.612919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.857 [2024-11-18 00:40:20.612960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.857 [2024-11-18 00:40:20.626308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.857 [2024-11-18 00:40:20.626650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.857 [2024-11-18 00:40:20.626693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.857 [2024-11-18 00:40:20.640085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
[2024-11-18 00:40:20.640395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.857 [2024-11-18 00:40:20.640423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.857 [2024-11-18 00:40:20.653664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.857 [2024-11-18 00:40:20.653896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.857 [2024-11-18 00:40:20.653921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:56.857 [2024-11-18 00:40:20.667285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:56.857 [2024-11-18 00:40:20.667513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:56.857 [2024-11-18 00:40:20.667539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:57.115 [2024-11-18 00:40:20.680888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:57.115 [2024-11-18 00:40:20.681139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.115 [2024-11-18 00:40:20.681166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:57.115 [2024-11-18 00:40:20.694442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:57.115 [2024-11-18 00:40:20.694664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.115 [2024-11-18 00:40:20.694691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:57.115 [2024-11-18 00:40:20.707800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:57.115 [2024-11-18 00:40:20.708089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.115 [2024-11-18 00:40:20.708115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:57.115 [2024-11-18 00:40:20.721210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:57.115 [2024-11-18 00:40:20.721464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.115 [2024-11-18 00:40:20.721507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:57.115 [2024-11-18 00:40:20.734549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:57.115 [2024-11-18 00:40:20.734762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.116 [2024-11-18 00:40:20.734803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:57.116 [2024-11-18 00:40:20.747906]
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:57.116 [2024-11-18 00:40:20.748142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.116 [2024-11-18 00:40:20.748183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:57.116 [2024-11-18 00:40:20.761282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:57.116 [2024-11-18 00:40:20.761558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.116 [2024-11-18 00:40:20.761600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:57.116 [2024-11-18 00:40:20.774540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:57.116 [2024-11-18 00:40:20.774801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.116 [2024-11-18 00:40:20.774828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:57.116 [2024-11-18 00:40:20.787955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:57.116 [2024-11-18 00:40:20.788190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.116 [2024-11-18 00:40:20.788216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
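The `data_crc32_calc_done` errors above mean the CRC32C data digest (DDGST) carried in each received NVMe/TCP PDU did not match the digest recomputed over the PDU data; the affected WRITEs then complete with a transient transport error. A minimal bit-by-bit sketch of the CRC32C (Castagnoli) calculation behind the digest check (real implementations such as SPDK's use table-driven or hardware-accelerated variants; the `payload` and `received_digest` values below are hypothetical, not taken from this log):

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bit-by-bit CRC32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right; XOR in the reflected polynomial when a bit falls out.
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC32C check value.
assert crc32c(b"123456789") == 0xE3069283

# Receiver-side digest check sketch: recompute over the PDU data and
# compare with the digest field from the wire (hypothetical values here;
# 0x1000 bytes matches the len:0x1000 seen in the log entries).
payload = b"\x00" * 0x1000
received_digest = crc32c(payload)  # in a real PDU this comes off the wire
if crc32c(payload) != received_digest:
    raise ValueError("Data digest error")  # the condition tcp.c reports
```

A mismatch like the ones logged here is expected when a test deliberately corrupts the digest or payload in flight; the check value assertion above is the conventional self-test for a CRC32C implementation.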
00:34:57.116 [2024-11-18 00:40:20.801220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90
00:34:57.116 [2024-11-18 00:40:20.801478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:57.116 [2024-11-18 00:40:20.801505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
[log condensed: the same data-digest-error / WRITE / transient-transport-error pattern on tqpair=(0x1d17460) repeats at ~13 ms intervals from 00:40:20.814 through 00:40:21.253, cycling cids 67-71 at varying LBAs]
00:34:57.632 19044.00 IOPS, 74.39 MiB/s [2024-11-17T23:40:21.454Z]
[log condensed: the pattern continues unchanged from 00:40:21.266 through 00:40:21.611]
00:34:57.891 [2024-11-18 00:40:21.624524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:57.891 [2024-11-18 00:40:21.624717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.891 [2024-11-18 00:40:21.624743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:57.891 [2024-11-18 00:40:21.637791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:57.891 [2024-11-18 00:40:21.637998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.891 [2024-11-18 00:40:21.638024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:57.891 [2024-11-18 00:40:21.651080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:57.891 [2024-11-18 00:40:21.651280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.891 [2024-11-18 00:40:21.651306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:57.891 [2024-11-18 00:40:21.664164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:57.891 [2024-11-18 00:40:21.664390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.891 [2024-11-18 00:40:21.664417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:57.891 [2024-11-18 00:40:21.677422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:57.891 
[2024-11-18 00:40:21.677619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.891 [2024-11-18 00:40:21.677660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:57.892 [2024-11-18 00:40:21.690668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:57.892 [2024-11-18 00:40:21.690875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.892 [2024-11-18 00:40:21.690901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:57.892 [2024-11-18 00:40:21.703921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:57.892 [2024-11-18 00:40:21.704119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.892 [2024-11-18 00:40:21.704145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.150 [2024-11-18 00:40:21.717124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.150 [2024-11-18 00:40:21.717308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.150 [2024-11-18 00:40:21.717345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.150 [2024-11-18 00:40:21.730138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.150 [2024-11-18 00:40:21.730365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.150 [2024-11-18 00:40:21.730392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.150 [2024-11-18 00:40:21.743405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.150 [2024-11-18 00:40:21.743664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.150 [2024-11-18 00:40:21.743690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.150 [2024-11-18 00:40:21.756317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.150 [2024-11-18 00:40:21.756509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.150 [2024-11-18 00:40:21.756551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.150 [2024-11-18 00:40:21.769358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.150 [2024-11-18 00:40:21.769631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.150 [2024-11-18 00:40:21.769672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.150 [2024-11-18 00:40:21.782530] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.150 [2024-11-18 00:40:21.782748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.150 [2024-11-18 00:40:21.782774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.150 [2024-11-18 00:40:21.795518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.150 [2024-11-18 00:40:21.795774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.150 [2024-11-18 00:40:21.795816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.150 [2024-11-18 00:40:21.808699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.150 [2024-11-18 00:40:21.808898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.150 [2024-11-18 00:40:21.808933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.150 [2024-11-18 00:40:21.822104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.150 [2024-11-18 00:40:21.822327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.150 [2024-11-18 00:40:21.822354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 
dnr:0 00:34:58.150 [2024-11-18 00:40:21.835244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.150 [2024-11-18 00:40:21.835495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.150 [2024-11-18 00:40:21.835523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.151 [2024-11-18 00:40:21.848452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.151 [2024-11-18 00:40:21.848696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.151 [2024-11-18 00:40:21.848722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.151 [2024-11-18 00:40:21.861668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.151 [2024-11-18 00:40:21.861952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.151 [2024-11-18 00:40:21.861993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.151 [2024-11-18 00:40:21.874919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.151 [2024-11-18 00:40:21.875108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.151 [2024-11-18 00:40:21.875134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.151 [2024-11-18 00:40:21.888046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.151 [2024-11-18 00:40:21.888273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.151 [2024-11-18 00:40:21.888322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.151 [2024-11-18 00:40:21.901185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.151 [2024-11-18 00:40:21.901408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.151 [2024-11-18 00:40:21.901434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.151 [2024-11-18 00:40:21.914415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.151 [2024-11-18 00:40:21.914605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.151 [2024-11-18 00:40:21.914631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.151 [2024-11-18 00:40:21.927588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.151 [2024-11-18 00:40:21.927842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.151 [2024-11-18 00:40:21.927868] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.151 [2024-11-18 00:40:21.940761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.151 [2024-11-18 00:40:21.940963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.151 [2024-11-18 00:40:21.940993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.151 [2024-11-18 00:40:21.953846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.151 [2024-11-18 00:40:21.954046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.151 [2024-11-18 00:40:21.954072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.151 [2024-11-18 00:40:21.967039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.151 [2024-11-18 00:40:21.967278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.151 [2024-11-18 00:40:21.967327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.409 [2024-11-18 00:40:21.980511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.409 [2024-11-18 00:40:21.980700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.409 [2024-11-18 00:40:21.980726] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.409 [2024-11-18 00:40:21.993758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.409 [2024-11-18 00:40:21.993959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.409 [2024-11-18 00:40:21.993984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.409 [2024-11-18 00:40:22.006993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.409 [2024-11-18 00:40:22.007193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.409 [2024-11-18 00:40:22.007219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.409 [2024-11-18 00:40:22.020116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.409 [2024-11-18 00:40:22.020332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.409 [2024-11-18 00:40:22.020359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.409 [2024-11-18 00:40:22.033340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.409 [2024-11-18 00:40:22.033714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:58.410 [2024-11-18 00:40:22.033756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.410 [2024-11-18 00:40:22.046407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.410 [2024-11-18 00:40:22.046672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.410 [2024-11-18 00:40:22.046715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.410 [2024-11-18 00:40:22.059515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.410 [2024-11-18 00:40:22.059763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.410 [2024-11-18 00:40:22.059789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.410 [2024-11-18 00:40:22.072841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.410 [2024-11-18 00:40:22.073149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.410 [2024-11-18 00:40:22.073190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.410 [2024-11-18 00:40:22.086338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.410 [2024-11-18 00:40:22.086515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7093 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.410 [2024-11-18 00:40:22.086543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.410 [2024-11-18 00:40:22.099416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.410 [2024-11-18 00:40:22.099659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.410 [2024-11-18 00:40:22.099698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.410 [2024-11-18 00:40:22.112237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.410 [2024-11-18 00:40:22.112488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.410 [2024-11-18 00:40:22.112531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.410 [2024-11-18 00:40:22.125666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.410 [2024-11-18 00:40:22.125946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.410 [2024-11-18 00:40:22.125988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.410 [2024-11-18 00:40:22.139031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.410 [2024-11-18 00:40:22.139266] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.410 [2024-11-18 00:40:22.139293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.410 [2024-11-18 00:40:22.152157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.410 [2024-11-18 00:40:22.152412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.410 [2024-11-18 00:40:22.152454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.410 [2024-11-18 00:40:22.165461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.410 [2024-11-18 00:40:22.165741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.410 [2024-11-18 00:40:22.165767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.410 [2024-11-18 00:40:22.178531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.410 [2024-11-18 00:40:22.178766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.410 [2024-11-18 00:40:22.178807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.410 [2024-11-18 00:40:22.191620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.410 [2024-11-18 00:40:22.191901] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.410 [2024-11-18 00:40:22.191943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.410 [2024-11-18 00:40:22.204940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.410 [2024-11-18 00:40:22.205192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.410 [2024-11-18 00:40:22.205219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.410 [2024-11-18 00:40:22.218044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.410 [2024-11-18 00:40:22.218243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.410 [2024-11-18 00:40:22.218269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.410 [2024-11-18 00:40:22.231407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.410 [2024-11-18 00:40:22.231587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.410 [2024-11-18 00:40:22.231616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.667 [2024-11-18 00:40:22.244717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with 
pdu=0x2000166fef90 00:34:58.667 [2024-11-18 00:40:22.244924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.667 [2024-11-18 00:40:22.244951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.667 [2024-11-18 00:40:22.257998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.667 [2024-11-18 00:40:22.258196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.667 [2024-11-18 00:40:22.258223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.667 19192.00 IOPS, 74.97 MiB/s [2024-11-17T23:40:22.489Z] [2024-11-18 00:40:22.270994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d17460) with pdu=0x2000166fef90 00:34:58.667 [2024-11-18 00:40:22.271225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:58.667 [2024-11-18 00:40:22.271259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:58.667 00:34:58.667 Latency(us) 00:34:58.667 [2024-11-17T23:40:22.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.667 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:58.667 nvme0n1 : 2.01 19193.92 74.98 0.00 0.00 6654.26 2827.76 15825.73 00:34:58.667 [2024-11-17T23:40:22.489Z] =================================================================================================================== 00:34:58.667 [2024-11-17T23:40:22.489Z] Total : 19193.92 74.98 0.00 
0.00 6654.26 2827.76 15825.73 00:34:58.667 { 00:34:58.667 "results": [ 00:34:58.667 { 00:34:58.667 "job": "nvme0n1", 00:34:58.667 "core_mask": "0x2", 00:34:58.667 "workload": "randwrite", 00:34:58.667 "status": "finished", 00:34:58.667 "queue_depth": 128, 00:34:58.667 "io_size": 4096, 00:34:58.667 "runtime": 2.006469, 00:34:58.667 "iops": 19193.917274575386, 00:34:58.667 "mibps": 74.9762393538101, 00:34:58.667 "io_failed": 0, 00:34:58.667 "io_timeout": 0, 00:34:58.668 "avg_latency_us": 6654.257014571697, 00:34:58.668 "min_latency_us": 2827.757037037037, 00:34:58.668 "max_latency_us": 15825.730370370371 00:34:58.668 } 00:34:58.668 ], 00:34:58.668 "core_count": 1 00:34:58.668 } 00:34:58.668 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:58.668 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:58.668 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:58.668 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:58.668 | .driver_specific 00:34:58.668 | .nvme_error 00:34:58.668 | .status_code 00:34:58.668 | .command_transient_transport_error' 00:34:58.924 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 151 > 0 )) 00:34:58.924 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 397200 00:34:58.924 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 397200 ']' 00:34:58.924 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 397200 00:34:58.924 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # 
uname 00:34:58.924 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:58.924 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 397200 00:34:58.924 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:58.924 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:58.924 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 397200' 00:34:58.924 killing process with pid 397200 00:34:58.924 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 397200 00:34:58.924 Received shutdown signal, test time was about 2.000000 seconds 00:34:58.924 00:34:58.924 Latency(us) 00:34:58.924 [2024-11-17T23:40:22.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.924 [2024-11-17T23:40:22.746Z] =================================================================================================================== 00:34:58.924 [2024-11-17T23:40:22.746Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:58.924 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 397200 00:34:59.180 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:34:59.180 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:59.180 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:59.180 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:59.180 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- 
# qd=16 00:34:59.180 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=397719 00:34:59.180 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:34:59.180 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 397719 /var/tmp/bperf.sock 00:34:59.180 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 397719 ']' 00:34:59.180 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:59.180 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:59.180 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:59.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:59.180 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:59.180 00:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:59.180 [2024-11-18 00:40:22.819246] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:34:59.180 [2024-11-18 00:40:22.819368] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397719 ] 00:34:59.180 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:59.180 Zero copy mechanism will not be used. 
00:34:59.180 [2024-11-18 00:40:22.886262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.180 [2024-11-18 00:40:22.933592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.438 00:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:59.438 00:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:59.438 00:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:59.438 00:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:59.695 00:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:59.695 00:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.695 00:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:59.695 00:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.695 00:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:59.695 00:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:59.954 nvme0n1 00:34:59.954 00:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:59.954 00:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.954 00:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:59.954 00:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.954 00:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:59.954 00:40:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:00.216 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:00.216 Zero copy mechanism will not be used. 00:35:00.216 Running I/O for 2 seconds... 00:35:00.216 [2024-11-18 00:40:23.834940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.216 [2024-11-18 00:40:23.835183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.216 [2024-11-18 00:40:23.835223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.216 [2024-11-18 00:40:23.841189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.216 [2024-11-18 00:40:23.841303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.216 [2024-11-18 00:40:23.841345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.216 [2024-11-18 
00:40:23.846193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.216 [2024-11-18 00:40:23.846673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.217 [2024-11-18 00:40:23.846766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.217 [2024-11-18 00:40:23.851121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.217 [2024-11-18 00:40:23.851494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.217 [2024-11-18 00:40:23.851526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.217 [2024-11-18 00:40:23.856975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.217 [2024-11-18 00:40:23.857260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.217 [2024-11-18 00:40:23.857290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.217 [2024-11-18 00:40:23.862832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.217 [2024-11-18 00:40:23.863106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.217 [2024-11-18 00:40:23.863137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:35:00.217 [2024-11-18 00:40:23.868584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.217 [2024-11-18 00:40:23.868909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.217 [2024-11-18 00:40:23.868940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.217 [2024-11-18 00:40:23.874927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.217 [2024-11-18 00:40:23.875144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.217 [2024-11-18 00:40:23.875175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.217 [2024-11-18 00:40:23.880643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.217 [2024-11-18 00:40:23.880955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.217 [2024-11-18 00:40:23.880986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.217 [2024-11-18 00:40:23.885739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.217 [2024-11-18 00:40:23.886015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.217 [2024-11-18 00:40:23.886046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.217 [2024-11-18 00:40:23.890415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.217 [2024-11-18 00:40:23.890737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.217 [2024-11-18 00:40:23.890796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.217 [2024-11-18 00:40:23.894797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.217 [2024-11-18 00:40:23.895275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.218 [2024-11-18 00:40:23.895323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.218 [2024-11-18 00:40:23.899227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.218 [2024-11-18 00:40:23.899485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.218 [2024-11-18 00:40:23.899596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.218 [2024-11-18 00:40:23.903794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.218 [2024-11-18 00:40:23.904097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.218 [2024-11-18 00:40:23.904172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.218 [2024-11-18 00:40:23.908113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.218 [2024-11-18 00:40:23.908491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.218 [2024-11-18 00:40:23.908560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.218 [2024-11-18 00:40:23.912727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.218 [2024-11-18 00:40:23.912939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.218 [2024-11-18 00:40:23.912970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.218 [2024-11-18 00:40:23.917166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.218 [2024-11-18 00:40:23.917416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.218 [2024-11-18 00:40:23.917447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.218 [2024-11-18 00:40:23.921847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.218 [2024-11-18 00:40:23.922166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:00.218 [2024-11-18 00:40:23.922203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.218 [2024-11-18 00:40:23.926373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.218 [2024-11-18 00:40:23.926901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.218 [2024-11-18 00:40:23.926953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.218 [2024-11-18 00:40:23.930965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.218 [2024-11-18 00:40:23.931308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.218 [2024-11-18 00:40:23.931379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.218 [2024-11-18 00:40:23.935530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.218 [2024-11-18 00:40:23.935701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.219 [2024-11-18 00:40:23.935732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.219 [2024-11-18 00:40:23.941117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.219 [2024-11-18 00:40:23.941423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.219 [2024-11-18 00:40:23.941473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.219 [2024-11-18 00:40:23.946678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.219 [2024-11-18 00:40:23.946953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.219 [2024-11-18 00:40:23.946984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.219 [2024-11-18 00:40:23.952474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.219 [2024-11-18 00:40:23.952777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.219 [2024-11-18 00:40:23.952808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.219 [2024-11-18 00:40:23.957579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.219 [2024-11-18 00:40:23.957930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.219 [2024-11-18 00:40:23.957961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.219 [2024-11-18 00:40:23.962422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.219 [2024-11-18 00:40:23.962607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.219 [2024-11-18 00:40:23.962689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.219 [2024-11-18 00:40:23.967560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.219 [2024-11-18 00:40:23.967879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.219 [2024-11-18 00:40:23.967910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.219 [2024-11-18 00:40:23.972763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.219 [2024-11-18 00:40:23.972978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.219 [2024-11-18 00:40:23.973048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.219 [2024-11-18 00:40:23.978901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.219 [2024-11-18 00:40:23.979377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.219 [2024-11-18 00:40:23.979409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.220 [2024-11-18 00:40:23.983543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 
00:35:00.220 [2024-11-18 00:40:23.983935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.220 [2024-11-18 00:40:23.984013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.220 [2024-11-18 00:40:23.988231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.220 [2024-11-18 00:40:23.988550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.220 [2024-11-18 00:40:23.988634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.220 [2024-11-18 00:40:23.992821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.220 [2024-11-18 00:40:23.993147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.220 [2024-11-18 00:40:23.993177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.220 [2024-11-18 00:40:23.997598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.220 [2024-11-18 00:40:23.997858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.220 [2024-11-18 00:40:23.997933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.220 [2024-11-18 00:40:24.002627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.220 [2024-11-18 00:40:24.002886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.220 [2024-11-18 00:40:24.002956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.220 [2024-11-18 00:40:24.007935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.220 [2024-11-18 00:40:24.008187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.220 [2024-11-18 00:40:24.008255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.220 [2024-11-18 00:40:24.013073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.220 [2024-11-18 00:40:24.013254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.220 [2024-11-18 00:40:24.013373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.220 [2024-11-18 00:40:24.018584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.220 [2024-11-18 00:40:24.018797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.221 [2024-11-18 00:40:24.018830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.221 [2024-11-18 00:40:24.024478] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.221 [2024-11-18 00:40:24.024618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.221 [2024-11-18 00:40:24.024695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.221 [2024-11-18 00:40:24.029670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.221 [2024-11-18 00:40:24.029872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.221 [2024-11-18 00:40:24.029935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.221 [2024-11-18 00:40:24.035018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.221 [2024-11-18 00:40:24.035236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.221 [2024-11-18 00:40:24.035268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.484 [2024-11-18 00:40:24.040290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.484 [2024-11-18 00:40:24.040530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.484 [2024-11-18 00:40:24.040585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:35:00.484 [2024-11-18 00:40:24.045126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.484 [2024-11-18 00:40:24.045374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.484 [2024-11-18 00:40:24.045464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.484 [2024-11-18 00:40:24.050149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.484 [2024-11-18 00:40:24.050384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.484 [2024-11-18 00:40:24.050465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.484 [2024-11-18 00:40:24.055396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.484 [2024-11-18 00:40:24.055592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.484 [2024-11-18 00:40:24.055623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.484 [2024-11-18 00:40:24.060650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.484 [2024-11-18 00:40:24.060923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.484 [2024-11-18 00:40:24.060984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.485 [2024-11-18 00:40:24.066003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.485 [2024-11-18 00:40:24.066256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.485 [2024-11-18 00:40:24.066323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.485 [2024-11-18 00:40:24.071175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.485 [2024-11-18 00:40:24.071431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.485 [2024-11-18 00:40:24.071506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.485 [2024-11-18 00:40:24.076327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.485 [2024-11-18 00:40:24.076635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.485 [2024-11-18 00:40:24.076713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.485 [2024-11-18 00:40:24.081532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.485 [2024-11-18 00:40:24.081813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.485 [2024-11-18 00:40:24.081844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.485 [2024-11-18 00:40:24.086747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.485 [2024-11-18 00:40:24.087050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.485 [2024-11-18 00:40:24.087081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... same three-message pattern repeats approximately every 5 ms from 00:40:24.091 through 00:40:24.485: tcp.c:2233:data_crc32_calc_done *ERROR* Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8, followed by nvme_io_qpair_print_command *NOTICE* WRITE sqid:1 nsid:1 (varying lba, len:32, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and spdk_nvme_print_completion *NOTICE* COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cdw0:0 p:0 m:0 dnr:0, with sqhd cycling 0002/0022/0042/0062; cid:3 throughout except cid:0 between 00:40:24.380 and 00:40:24.401 ...]
00:35:00.747 [2024-11-18 00:40:24.490048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.747 [2024-11-18 00:40:24.490212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15840
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.747 [2024-11-18 00:40:24.490242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.747 [2024-11-18 00:40:24.495195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.747 [2024-11-18 00:40:24.495412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.747 [2024-11-18 00:40:24.495486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.747 [2024-11-18 00:40:24.500298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.747 [2024-11-18 00:40:24.500499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.747 [2024-11-18 00:40:24.500572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.747 [2024-11-18 00:40:24.505507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.747 [2024-11-18 00:40:24.505758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.747 [2024-11-18 00:40:24.505793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.747 [2024-11-18 00:40:24.510782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.747 [2024-11-18 00:40:24.511001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.748 [2024-11-18 00:40:24.511084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.748 [2024-11-18 00:40:24.516059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.748 [2024-11-18 00:40:24.516284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.748 [2024-11-18 00:40:24.516322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.748 [2024-11-18 00:40:24.521307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.748 [2024-11-18 00:40:24.521541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.748 [2024-11-18 00:40:24.521572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.748 [2024-11-18 00:40:24.526645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.748 [2024-11-18 00:40:24.526876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.748 [2024-11-18 00:40:24.526949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.748 [2024-11-18 00:40:24.531896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 
00:35:00.748 [2024-11-18 00:40:24.532152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.748 [2024-11-18 00:40:24.532193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.748 [2024-11-18 00:40:24.537105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.748 [2024-11-18 00:40:24.537390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.748 [2024-11-18 00:40:24.537425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.748 [2024-11-18 00:40:24.542399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.748 [2024-11-18 00:40:24.542662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.748 [2024-11-18 00:40:24.542730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.748 [2024-11-18 00:40:24.547782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.748 [2024-11-18 00:40:24.547998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.748 [2024-11-18 00:40:24.548028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.748 [2024-11-18 00:40:24.553136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.748 [2024-11-18 00:40:24.553326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.748 [2024-11-18 00:40:24.553413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.748 [2024-11-18 00:40:24.558383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.748 [2024-11-18 00:40:24.558657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.748 [2024-11-18 00:40:24.558688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.748 [2024-11-18 00:40:24.563499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:00.748 [2024-11-18 00:40:24.563740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.748 [2024-11-18 00:40:24.563807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.010 [2024-11-18 00:40:24.568622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.010 [2024-11-18 00:40:24.568888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.010 [2024-11-18 00:40:24.568926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.010 [2024-11-18 00:40:24.573894] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.010 [2024-11-18 00:40:24.574085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.010 [2024-11-18 00:40:24.574116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.010 [2024-11-18 00:40:24.579149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.010 [2024-11-18 00:40:24.579453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.010 [2024-11-18 00:40:24.579530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.010 [2024-11-18 00:40:24.584542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.010 [2024-11-18 00:40:24.584770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.010 [2024-11-18 00:40:24.584803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.010 [2024-11-18 00:40:24.589704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.010 [2024-11-18 00:40:24.589993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.010 [2024-11-18 00:40:24.590070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:35:01.010 [2024-11-18 00:40:24.594977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.010 [2024-11-18 00:40:24.595195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.010 [2024-11-18 00:40:24.595251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.010 [2024-11-18 00:40:24.600232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.010 [2024-11-18 00:40:24.600546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.010 [2024-11-18 00:40:24.600628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.010 [2024-11-18 00:40:24.605520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.010 [2024-11-18 00:40:24.605740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.010 [2024-11-18 00:40:24.605793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.010 [2024-11-18 00:40:24.610725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.010 [2024-11-18 00:40:24.610987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.011 [2024-11-18 00:40:24.611018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.011 [2024-11-18 00:40:24.616217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.011 [2024-11-18 00:40:24.616541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.011 [2024-11-18 00:40:24.616640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.011 [2024-11-18 00:40:24.621409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.011 [2024-11-18 00:40:24.621606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.011 [2024-11-18 00:40:24.621660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.011 [2024-11-18 00:40:24.626630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.011 [2024-11-18 00:40:24.626833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.011 [2024-11-18 00:40:24.626864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.011 [2024-11-18 00:40:24.631857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.011 [2024-11-18 00:40:24.632190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.011 [2024-11-18 00:40:24.632220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.011 [2024-11-18 00:40:24.637001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.011 [2024-11-18 00:40:24.637151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.011 [2024-11-18 00:40:24.637233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.011 [2024-11-18 00:40:24.642163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.011 [2024-11-18 00:40:24.642362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.011 [2024-11-18 00:40:24.642445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.011 [2024-11-18 00:40:24.647367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.011 [2024-11-18 00:40:24.647568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.011 [2024-11-18 00:40:24.647644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.012 [2024-11-18 00:40:24.652631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.012 [2024-11-18 00:40:24.652781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:01.012 [2024-11-18 00:40:24.652859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.012 [2024-11-18 00:40:24.657723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.012 [2024-11-18 00:40:24.657914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.012 [2024-11-18 00:40:24.657953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.012 [2024-11-18 00:40:24.662870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.012 [2024-11-18 00:40:24.663097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.012 [2024-11-18 00:40:24.663158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.012 [2024-11-18 00:40:24.668082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.012 [2024-11-18 00:40:24.668329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.012 [2024-11-18 00:40:24.668360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.012 [2024-11-18 00:40:24.673197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.012 [2024-11-18 00:40:24.673376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.012 [2024-11-18 00:40:24.673406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.012 [2024-11-18 00:40:24.678538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.012 [2024-11-18 00:40:24.678735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.012 [2024-11-18 00:40:24.678765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.012 [2024-11-18 00:40:24.683806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.012 [2024-11-18 00:40:24.684016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.012 [2024-11-18 00:40:24.684095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.012 [2024-11-18 00:40:24.688962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.012 [2024-11-18 00:40:24.689100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.012 [2024-11-18 00:40:24.689130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.012 [2024-11-18 00:40:24.694250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.013 [2024-11-18 00:40:24.694442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.013 [2024-11-18 00:40:24.694516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.013 [2024-11-18 00:40:24.699296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.013 [2024-11-18 00:40:24.699515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.013 [2024-11-18 00:40:24.699595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.013 [2024-11-18 00:40:24.704531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.013 [2024-11-18 00:40:24.704737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.013 [2024-11-18 00:40:24.704766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.013 [2024-11-18 00:40:24.709601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.013 [2024-11-18 00:40:24.709800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.013 [2024-11-18 00:40:24.709878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.013 [2024-11-18 00:40:24.714867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 
00:35:01.013 [2024-11-18 00:40:24.715065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.013 [2024-11-18 00:40:24.715095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.013 [2024-11-18 00:40:24.720040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.013 [2024-11-18 00:40:24.720236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.013 [2024-11-18 00:40:24.720266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.013 [2024-11-18 00:40:24.725225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.013 [2024-11-18 00:40:24.725415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.013 [2024-11-18 00:40:24.725491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.013 [2024-11-18 00:40:24.730375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.013 [2024-11-18 00:40:24.730562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.014 [2024-11-18 00:40:24.730593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.014 [2024-11-18 00:40:24.735515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.014 [2024-11-18 00:40:24.735726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.014 [2024-11-18 00:40:24.735758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.014 [2024-11-18 00:40:24.740768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.014 [2024-11-18 00:40:24.740922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.014 [2024-11-18 00:40:24.740999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.014 [2024-11-18 00:40:24.745796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.014 [2024-11-18 00:40:24.746022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.014 [2024-11-18 00:40:24.746053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.014 [2024-11-18 00:40:24.750947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.014 [2024-11-18 00:40:24.751087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.014 [2024-11-18 00:40:24.751149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.014 [2024-11-18 00:40:24.756137] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.014 [2024-11-18 00:40:24.756375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.015 [2024-11-18 00:40:24.756410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.015 [2024-11-18 00:40:24.761389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.015 [2024-11-18 00:40:24.761508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.015 [2024-11-18 00:40:24.761552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.015 [2024-11-18 00:40:24.766608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.015 [2024-11-18 00:40:24.766840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.015 [2024-11-18 00:40:24.766870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.015 [2024-11-18 00:40:24.771716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.015 [2024-11-18 00:40:24.771920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.015 [2024-11-18 00:40:24.771951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:35:01.015 [2024-11-18 00:40:24.776917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.015 [2024-11-18 00:40:24.777115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.015 [2024-11-18 00:40:24.777193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-record pattern (tcp.c:2233:data_crc32_calc_done data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8, followed by the nvme_qpair.c WRITE command print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats on qid:1 with varying cid (0-3), lba, and sqhd values from 00:40:24.782050 through 00:40:25.144380 ...]
00:35:01.282 5920.00 IOPS, 740.00 MiB/s [2024-11-17T23:40:25.104Z]
00:35:01.544 [2024-11-18 00:40:25.149142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.544 [2024-11-18 00:40:25.149360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12736
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.149390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.154190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.154566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.154597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.159420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.159592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.159665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.165144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.165282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.165339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.170800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.170955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.170986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.176907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.177116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.177147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.182476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.182575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.182605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.187278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.187381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.187411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.192061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 
00:35:01.545 [2024-11-18 00:40:25.192189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.192219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.196793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.196931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.196962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.201861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.201955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.201983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.206897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.206981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.207010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.211636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.211832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.211862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.216573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.216665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.216756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.221356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.221522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.221552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.226164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.226265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.226297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.230733] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.231031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.231103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.235491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.235687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.235717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.240239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.240602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.240672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.244470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.244759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.244851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:35:01.545 [2024-11-18 00:40:25.248842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.249084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.249114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.253144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.253474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.253525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.257477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.257968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.258064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.261783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.262092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.262203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.266091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.266362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.266437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.270528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.270846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.270895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.275385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.545 [2024-11-18 00:40:25.275577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.545 [2024-11-18 00:40:25.275608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.545 [2024-11-18 00:40:25.280455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.546 [2024-11-18 00:40:25.280656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-11-18 00:40:25.280745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.546 [2024-11-18 00:40:25.286156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.546 [2024-11-18 00:40:25.286363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-11-18 00:40:25.286394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.546 [2024-11-18 00:40:25.291392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.546 [2024-11-18 00:40:25.291751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-11-18 00:40:25.291834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.546 [2024-11-18 00:40:25.295687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.546 [2024-11-18 00:40:25.295935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-11-18 00:40:25.296013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.546 [2024-11-18 00:40:25.300268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.546 [2024-11-18 00:40:25.300575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:01.546 [2024-11-18 00:40:25.300606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.546 [2024-11-18 00:40:25.304872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.546 [2024-11-18 00:40:25.305118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-11-18 00:40:25.305156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.546 [2024-11-18 00:40:25.309359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.546 [2024-11-18 00:40:25.309648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-11-18 00:40:25.309721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.546 [2024-11-18 00:40:25.313855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.546 [2024-11-18 00:40:25.314188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-11-18 00:40:25.314267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.546 [2024-11-18 00:40:25.318401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.546 [2024-11-18 00:40:25.318664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-11-18 00:40:25.318695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.546 [2024-11-18 00:40:25.323028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.546 [2024-11-18 00:40:25.323153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-11-18 00:40:25.323215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.546 [2024-11-18 00:40:25.327605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.546 [2024-11-18 00:40:25.327790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-11-18 00:40:25.327820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.546 [2024-11-18 00:40:25.332066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.546 [2024-11-18 00:40:25.332264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-11-18 00:40:25.332295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.546 [2024-11-18 00:40:25.336513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.546 [2024-11-18 00:40:25.336736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-11-18 00:40:25.336766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.546 [2024-11-18 00:40:25.341050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.546 [2024-11-18 00:40:25.341442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-11-18 00:40:25.341497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.546 [2024-11-18 00:40:25.345558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.546 [2024-11-18 00:40:25.345756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-11-18 00:40:25.345821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.546 [2024-11-18 00:40:25.350036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.546 [2024-11-18 00:40:25.350148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-11-18 00:40:25.350228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.546 [2024-11-18 00:40:25.354450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 
00:35:01.546 [2024-11-18 00:40:25.354685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-11-18 00:40:25.354715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.546 [2024-11-18 00:40:25.358917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.546 [2024-11-18 00:40:25.359166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-11-18 00:40:25.359196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.546 [2024-11-18 00:40:25.363462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.546 [2024-11-18 00:40:25.363656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.546 [2024-11-18 00:40:25.363735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.806 [2024-11-18 00:40:25.367985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.806 [2024-11-18 00:40:25.368327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.806 [2024-11-18 00:40:25.368358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.806 [2024-11-18 00:40:25.372581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.806 [2024-11-18 00:40:25.372823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.806 [2024-11-18 00:40:25.372863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:01.806 [2024-11-18 00:40:25.377100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.806 [2024-11-18 00:40:25.377283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.806 [2024-11-18 00:40:25.377320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:01.806 [2024-11-18 00:40:25.381658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.806 [2024-11-18 00:40:25.381917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.806 [2024-11-18 00:40:25.382017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:01.806 [2024-11-18 00:40:25.386271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:01.806 [2024-11-18 00:40:25.386590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.806 [2024-11-18 00:40:25.386650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:01.806 [2024-11-18 00:40:25.391000] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.806 [2024-11-18 00:40:25.391207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.806 [2024-11-18 00:40:25.391237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:01.806 [2024-11-18 00:40:25.395503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.806 [2024-11-18 00:40:25.395728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.806 [2024-11-18 00:40:25.395759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:01.806 [2024-11-18 00:40:25.400069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.806 [2024-11-18 00:40:25.400361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.806 [2024-11-18 00:40:25.400429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:01.806 [2024-11-18 00:40:25.404526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.806 [2024-11-18 00:40:25.404856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.806 [2024-11-18 00:40:25.404886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:01.806 [2024-11-18 00:40:25.409015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.806 [2024-11-18 00:40:25.409424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.806 [2024-11-18 00:40:25.409456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:01.806 [2024-11-18 00:40:25.413560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.806 [2024-11-18 00:40:25.413889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.806 [2024-11-18 00:40:25.413960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:01.806 [2024-11-18 00:40:25.418047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.806 [2024-11-18 00:40:25.418362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.806 [2024-11-18 00:40:25.418394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:01.806 [2024-11-18 00:40:25.422653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.806 [2024-11-18 00:40:25.422874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.806 [2024-11-18 00:40:25.422924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:01.806 [2024-11-18 00:40:25.427260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.806 [2024-11-18 00:40:25.427563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.806 [2024-11-18 00:40:25.427594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:01.806 [2024-11-18 00:40:25.431685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.806 [2024-11-18 00:40:25.432027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.806 [2024-11-18 00:40:25.432058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:01.806 [2024-11-18 00:40:25.436057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.806 [2024-11-18 00:40:25.436390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.806 [2024-11-18 00:40:25.436439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:01.806 [2024-11-18 00:40:25.440669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.806 [2024-11-18 00:40:25.440938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.806 [2024-11-18 00:40:25.441017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:01.806 [2024-11-18 00:40:25.445331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.806 [2024-11-18 00:40:25.445549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.806 [2024-11-18 00:40:25.445579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:01.806 [2024-11-18 00:40:25.449945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.806 [2024-11-18 00:40:25.450192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.806 [2024-11-18 00:40:25.450291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:01.806 [2024-11-18 00:40:25.454530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.806 [2024-11-18 00:40:25.454779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.806 [2024-11-18 00:40:25.454822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:01.806 [2024-11-18 00:40:25.459019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.806 [2024-11-18 00:40:25.459458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.806 [2024-11-18 00:40:25.459489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:01.806 [2024-11-18 00:40:25.463749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.806 [2024-11-18 00:40:25.464059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.464141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.468269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.468510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.468542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.472950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.473172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.473203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.477515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.477726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.477756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.482012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.482238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.482346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.486484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.486816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.486862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.490901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.491201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.491252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.495385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.495627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.495721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.499842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.500159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.500236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.504631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.504779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.504810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.509652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.509902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.509946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.514864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.515080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.515119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.521092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.521278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.521309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.526495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.526688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.526766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.530970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.531308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.531346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.535387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.535587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.535685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.541019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.541210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.541240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.545571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.545822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.545916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.549852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.550143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.550216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.554221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.554453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.554538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.558566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.558958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.559026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.562844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.563067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.563115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.567104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.567494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.567525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.571360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.571717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.571748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.575660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.575853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.575966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.580170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.580379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.580434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.585136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.585336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.807 [2024-11-18 00:40:25.585417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:01.807 [2024-11-18 00:40:25.590282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.807 [2024-11-18 00:40:25.590502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.808 [2024-11-18 00:40:25.590534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:01.808 [2024-11-18 00:40:25.596216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.808 [2024-11-18 00:40:25.596384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.808 [2024-11-18 00:40:25.596415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:01.808 [2024-11-18 00:40:25.601829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.808 [2024-11-18 00:40:25.602054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.808 [2024-11-18 00:40:25.602126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:01.808 [2024-11-18 00:40:25.607045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.808 [2024-11-18 00:40:25.607241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.808 [2024-11-18 00:40:25.607271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:01.808 [2024-11-18 00:40:25.612234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.808 [2024-11-18 00:40:25.612525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.808 [2024-11-18 00:40:25.612594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:01.808 [2024-11-18 00:40:25.617388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.808 [2024-11-18 00:40:25.617659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.808 [2024-11-18 00:40:25.617694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:01.808 [2024-11-18 00:40:25.622542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:01.808 [2024-11-18 00:40:25.622802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.808 [2024-11-18 00:40:25.622834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:02.067 [2024-11-18 00:40:25.627805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.067 [2024-11-18 00:40:25.628137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.067 [2024-11-18 00:40:25.628174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:02.067 [2024-11-18 00:40:25.633093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.067 [2024-11-18 00:40:25.633369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.067 [2024-11-18 00:40:25.633399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:02.067 [2024-11-18 00:40:25.638273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.067 [2024-11-18 00:40:25.638603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.067 [2024-11-18 00:40:25.638648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:02.067 [2024-11-18 00:40:25.643602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.067 [2024-11-18 00:40:25.643934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.067 [2024-11-18 00:40:25.643964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.648792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.649100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.649130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.653979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.654268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.654357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.659151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.659425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.659493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.664720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.664910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.664941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.670398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.670657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.670687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.675061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.675400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.675431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.679809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.680113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.680190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.684927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.685082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.685113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.690175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.690508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.690539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.694744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.695090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.695120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.699210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.699528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.699559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.703585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.703863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.703898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.708199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.708479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.708512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.712781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.713051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.713118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.717231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.717569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.717600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.721850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.722125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.722155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.726415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.726754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.726788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.730892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.731134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.731200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.735494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.735703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.735782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.740627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.740880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.740910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.745952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.746104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.746162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.751878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.752134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.752223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.756499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.756890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.756974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.761010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.761264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.761355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.765374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.765614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.765667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.769895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.770237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.770273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:02.068 [2024-11-18 00:40:25.774382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.068 [2024-11-18 00:40:25.774673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.068 [2024-11-18 00:40:25.774704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:02.069 [2024-11-18 00:40:25.778857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8
00:35:02.069 [2024-11-18 00:40:25.779178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:02.069 [2024-11-18 00:40:25.779262] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:02.069 [2024-11-18 00:40:25.783269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:02.069 [2024-11-18 00:40:25.783588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-11-18 00:40:25.783640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:02.069 [2024-11-18 00:40:25.787770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:02.069 [2024-11-18 00:40:25.788001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-11-18 00:40:25.788031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:02.069 [2024-11-18 00:40:25.792395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:02.069 [2024-11-18 00:40:25.792554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-11-18 00:40:25.792607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:02.069 [2024-11-18 00:40:25.796995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:02.069 [2024-11-18 00:40:25.797349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:02.069 [2024-11-18 00:40:25.797380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:02.069 [2024-11-18 00:40:25.801486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:02.069 [2024-11-18 00:40:25.801746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-11-18 00:40:25.801785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:02.069 [2024-11-18 00:40:25.806001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:02.069 [2024-11-18 00:40:25.806231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-11-18 00:40:25.806292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:02.069 [2024-11-18 00:40:25.810499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:02.069 [2024-11-18 00:40:25.810826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-11-18 00:40:25.810910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:02.069 [2024-11-18 00:40:25.814869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:02.069 [2024-11-18 00:40:25.815165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-11-18 00:40:25.815201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:02.069 [2024-11-18 00:40:25.819421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:02.069 [2024-11-18 00:40:25.819659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-11-18 00:40:25.819690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:02.069 [2024-11-18 00:40:25.823919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:02.069 [2024-11-18 00:40:25.824241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-11-18 00:40:25.824281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:02.069 [2024-11-18 00:40:25.828426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:02.069 [2024-11-18 00:40:25.828737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-11-18 00:40:25.828768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:02.069 6287.00 IOPS, 785.88 MiB/s [2024-11-17T23:40:25.891Z] [2024-11-18 00:40:25.834445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d177a0) with pdu=0x2000166ff3c8 00:35:02.069 [2024-11-18 
00:40:25.834592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.069 [2024-11-18 00:40:25.834669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:02.069 00:35:02.069 Latency(us) 00:35:02.069 [2024-11-17T23:40:25.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:02.069 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:02.069 nvme0n1 : 2.00 6282.19 785.27 0.00 0.00 2536.64 1735.49 12621.75 00:35:02.069 [2024-11-17T23:40:25.891Z] =================================================================================================================== 00:35:02.069 [2024-11-17T23:40:25.891Z] Total : 6282.19 785.27 0.00 0.00 2536.64 1735.49 12621.75 00:35:02.069 { 00:35:02.069 "results": [ 00:35:02.069 { 00:35:02.069 "job": "nvme0n1", 00:35:02.069 "core_mask": "0x2", 00:35:02.069 "workload": "randwrite", 00:35:02.069 "status": "finished", 00:35:02.069 "queue_depth": 16, 00:35:02.069 "io_size": 131072, 00:35:02.069 "runtime": 2.004556, 00:35:02.069 "iops": 6282.189173063761, 00:35:02.069 "mibps": 785.2736466329701, 00:35:02.069 "io_failed": 0, 00:35:02.069 "io_timeout": 0, 00:35:02.069 "avg_latency_us": 2536.636750222787, 00:35:02.069 "min_latency_us": 1735.4903703703703, 00:35:02.069 "max_latency_us": 12621.748148148148 00:35:02.069 } 00:35:02.069 ], 00:35:02.069 "core_count": 1 00:35:02.069 } 00:35:02.069 00:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:02.069 00:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:02.069 00:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:02.069 00:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:02.069 | .driver_specific 00:35:02.069 | .nvme_error 00:35:02.069 | .status_code 00:35:02.069 | .command_transient_transport_error' 00:35:02.332 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 407 > 0 )) 00:35:02.332 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 397719 00:35:02.332 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 397719 ']' 00:35:02.332 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 397719 00:35:02.332 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:02.332 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:02.332 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 397719 00:35:02.629 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:02.629 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:02.629 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 397719' 00:35:02.629 killing process with pid 397719 00:35:02.629 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 397719 00:35:02.629 Received shutdown signal, test time was about 2.000000 seconds 00:35:02.629 00:35:02.629 Latency(us) 00:35:02.629 [2024-11-17T23:40:26.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:02.629 
[2024-11-17T23:40:26.451Z] =================================================================================================================== 00:35:02.629 [2024-11-17T23:40:26.451Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:02.629 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 397719 00:35:02.629 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 396354 00:35:02.629 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 396354 ']' 00:35:02.629 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 396354 00:35:02.629 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:02.629 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:02.629 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396354 00:35:02.629 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:02.629 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:02.629 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396354' 00:35:02.629 killing process with pid 396354 00:35:02.629 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 396354 00:35:02.629 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 396354 00:35:02.927 00:35:02.927 real 0m15.132s 00:35:02.927 user 0m29.093s 00:35:02.927 sys 0m4.648s 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:02.927 ************************************ 00:35:02.927 END TEST nvmf_digest_error 00:35:02.927 ************************************ 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:02.927 rmmod nvme_tcp 00:35:02.927 rmmod nvme_fabrics 00:35:02.927 rmmod nvme_keyring 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 396354 ']' 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 396354 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 396354 ']' 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 396354 00:35:02.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (396354) - No such process 
00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 396354 is not found' 00:35:02.927 Process with pid 396354 is not found 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:02.927 00:40:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:05.505 00:40:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:05.505 00:35:05.506 real 0m35.457s 00:35:05.506 user 1m1.799s 00:35:05.506 sys 0m10.466s 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:05.506 ************************************ 00:35:05.506 END TEST nvmf_digest 00:35:05.506 ************************************ 00:35:05.506 00:40:28 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.506 ************************************ 00:35:05.506 START TEST nvmf_bdevperf 00:35:05.506 ************************************ 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:05.506 * Looking for test storage... 
00:35:05.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:05.506 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:05.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.507 --rc genhtml_branch_coverage=1 00:35:05.507 --rc genhtml_function_coverage=1 00:35:05.507 --rc genhtml_legend=1 00:35:05.507 --rc geninfo_all_blocks=1 00:35:05.507 --rc geninfo_unexecuted_blocks=1 00:35:05.507 00:35:05.507 ' 00:35:05.507 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:35:05.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.507 --rc genhtml_branch_coverage=1 00:35:05.507 --rc genhtml_function_coverage=1 00:35:05.507 --rc genhtml_legend=1 00:35:05.507 --rc geninfo_all_blocks=1 00:35:05.508 --rc geninfo_unexecuted_blocks=1 00:35:05.508 00:35:05.508 ' 00:35:05.508 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:05.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.508 --rc genhtml_branch_coverage=1 00:35:05.508 --rc genhtml_function_coverage=1 00:35:05.508 --rc genhtml_legend=1 00:35:05.508 --rc geninfo_all_blocks=1 00:35:05.508 --rc geninfo_unexecuted_blocks=1 00:35:05.508 00:35:05.508 ' 00:35:05.508 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:05.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.508 --rc genhtml_branch_coverage=1 00:35:05.508 --rc genhtml_function_coverage=1 00:35:05.508 --rc genhtml_legend=1 00:35:05.508 --rc geninfo_all_blocks=1 00:35:05.508 --rc geninfo_unexecuted_blocks=1 00:35:05.508 00:35:05.508 ' 00:35:05.508 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:05.508 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:05.508 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:05.508 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:05.508 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:05.508 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:05.508 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:05.508 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:35:05.508 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:05.508 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:05.508 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:05.508 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:05.508 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:05.508 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:05.509 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:05.509 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:05.509 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:05.509 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:05.509 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:05.509 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:05.509 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:05.509 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:05.509 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:05.509 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.509 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:05.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:05.510 00:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:07.416 00:40:31 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:07.416 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:07.416 
00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:07.416 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:07.416 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:07.416 00:40:31 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:07.416 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:07.416 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:07.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:07.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:35:07.675 00:35:07.675 --- 10.0.0.2 ping statistics --- 00:35:07.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.675 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:07.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:07.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:35:07.675 00:35:07.675 --- 10.0.0.1 ping statistics --- 00:35:07.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.675 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=400088 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 400088 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 400088 ']' 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:07.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:07.675 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:07.675 [2024-11-18 00:40:31.372014] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:07.675 [2024-11-18 00:40:31.372105] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:07.676 [2024-11-18 00:40:31.444241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:07.676 [2024-11-18 00:40:31.489069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:07.676 [2024-11-18 00:40:31.489138] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:07.676 [2024-11-18 00:40:31.489151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:07.676 [2024-11-18 00:40:31.489162] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:07.676 [2024-11-18 00:40:31.489185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
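The nvmf_tcp_init steps traced earlier in the log can be sketched as a standalone script: one physical port (cvl_0_0) is moved into a network namespace to act as the target, while its sibling (cvl_0_1) stays in the default namespace as the initiator. Interface names and addresses are taken from the log; the setup is gated on root and on the NICs actually existing, so the sketch is a safe no-op elsewhere.

```shell
#!/bin/sh
# Hypothetical sketch of the two-namespace NVMe/TCP topology; names and
# IPs mirror the log, but this is not the harness's own nvmf/common.sh.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target-side port, moved into $NS
INI_IF=cvl_0_1   # initiator-side port, stays in the default namespace

setup() {
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # allow NVMe/TCP traffic in on the initiator-side port
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    # verify both directions, as the harness does before returning 0
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
}

if [ "$(id -u)" = 0 ] && [ -d "/sys/class/net/$TGT_IF" ]; then
    setup
else
    MSG="sketch only"
    echo "$MSG: needs root and interfaces $TGT_IF/$INI_IF"
fi
```

Note the asymmetry: the target IP (10.0.0.2) lives inside the namespace, which is why the nvmf_tgt process later in the log is launched under `ip netns exec cvl_0_0_ns_spdk`.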
00:35:07.676 [2024-11-18 00:40:31.490592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:07.676 [2024-11-18 00:40:31.490645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:07.676 [2024-11-18 00:40:31.490649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:07.934 [2024-11-18 00:40:31.627247] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:07.934 Malloc0 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:07.934 [2024-11-18 00:40:31.693164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:07.934 
00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:07.934 { 00:35:07.934 "params": { 00:35:07.934 "name": "Nvme$subsystem", 00:35:07.934 "trtype": "$TEST_TRANSPORT", 00:35:07.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:07.934 "adrfam": "ipv4", 00:35:07.934 "trsvcid": "$NVMF_PORT", 00:35:07.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:07.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:07.934 "hdgst": ${hdgst:-false}, 00:35:07.934 "ddgst": ${ddgst:-false} 00:35:07.934 }, 00:35:07.934 "method": "bdev_nvme_attach_controller" 00:35:07.934 } 00:35:07.934 EOF 00:35:07.934 )") 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:07.934 00:40:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:07.934 "params": { 00:35:07.934 "name": "Nvme1", 00:35:07.934 "trtype": "tcp", 00:35:07.934 "traddr": "10.0.0.2", 00:35:07.934 "adrfam": "ipv4", 00:35:07.934 "trsvcid": "4420", 00:35:07.934 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:07.934 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:07.934 "hdgst": false, 00:35:07.934 "ddgst": false 00:35:07.934 }, 00:35:07.934 "method": "bdev_nvme_attach_controller" 00:35:07.934 }' 00:35:07.934 [2024-11-18 00:40:31.741064] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
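The tgt_init RPC sequence exercised above (rpc_cmd calls for the transport, the 64 MiB/512 B Malloc bdev, the subsystem, its namespace, and the TCP listener) can be written out with SPDK's rpc.py. The `rpc()` wrapper here only prints each call (a dry run), since a live nvmf_tgt listening on /var/tmp/spdk.sock is assumed but not available; `scripts/rpc.py` is the conventional path inside an SPDK checkout.

```shell
#!/bin/sh
# Dry-run sketch of the tgt_init RPC sequence from the log above.
RPC="scripts/rpc.py"          # assumed SPDK checkout path
rpc() { echo "$RPC $*"; }     # prints instead of executing

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Dropping the `echo` from the wrapper turns the dry run into the real configuration, matching the order the log's host/bdevperf.sh steps 17-21 issue them in.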
00:35:07.934 [2024-11-18 00:40:31.741138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400114 ] 00:35:08.192 [2024-11-18 00:40:31.813397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.192 [2024-11-18 00:40:31.860760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.449 Running I/O for 1 seconds... 00:35:09.384 8521.00 IOPS, 33.29 MiB/s 00:35:09.384 Latency(us) 00:35:09.384 [2024-11-17T23:40:33.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:09.384 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:09.384 Verification LBA range: start 0x0 length 0x4000 00:35:09.384 Nvme1n1 : 1.05 8237.25 32.18 0.00 0.00 14887.86 3422.44 43884.85 00:35:09.384 [2024-11-17T23:40:33.206Z] =================================================================================================================== 00:35:09.384 [2024-11-17T23:40:33.206Z] Total : 8237.25 32.18 0.00 0.00 14887.86 3422.44 43884.85 00:35:09.642 00:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=400374 00:35:09.642 00:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:09.642 00:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:09.642 00:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:09.642 00:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:09.642 00:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:09.642 00:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:35:09.642 00:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:09.642 { 00:35:09.642 "params": { 00:35:09.642 "name": "Nvme$subsystem", 00:35:09.642 "trtype": "$TEST_TRANSPORT", 00:35:09.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:09.642 "adrfam": "ipv4", 00:35:09.642 "trsvcid": "$NVMF_PORT", 00:35:09.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:09.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:09.642 "hdgst": ${hdgst:-false}, 00:35:09.642 "ddgst": ${ddgst:-false} 00:35:09.642 }, 00:35:09.642 "method": "bdev_nvme_attach_controller" 00:35:09.642 } 00:35:09.642 EOF 00:35:09.642 )") 00:35:09.642 00:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:09.642 00:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:09.642 00:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:09.642 00:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:09.642 "params": { 00:35:09.642 "name": "Nvme1", 00:35:09.642 "trtype": "tcp", 00:35:09.642 "traddr": "10.0.0.2", 00:35:09.642 "adrfam": "ipv4", 00:35:09.642 "trsvcid": "4420", 00:35:09.642 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:09.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:09.642 "hdgst": false, 00:35:09.642 "ddgst": false 00:35:09.642 }, 00:35:09.642 "method": "bdev_nvme_attach_controller" 00:35:09.642 }' 00:35:09.642 [2024-11-18 00:40:33.413548] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
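The gen_nvmf_target_json output that bdevperf receives on its `--json /dev/fd/6x` descriptor is reconstructed verbatim below from the printf in the log; piping it through `python3 -m json.tool` simply confirms the fragment is well-formed JSON.

```shell
#!/bin/sh
# The per-subsystem attach fragment gen_nvmf_target_json prints
# (copied from the log above), round-tripped through a JSON parser.
cfg=$(mktemp)
cat <<'EOF' > "$cfg"
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
python3 -m json.tool "$cfg"
```

This is why the `${hdgst:-false}`/`${ddgst:-false}` defaults in the heredoc matter: with neither digest variable set, the rendered config disables both header and data digests for the attached controller.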
00:35:09.642 [2024-11-18 00:40:33.413644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid400374 ] 00:35:09.901 [2024-11-18 00:40:33.482443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.901 [2024-11-18 00:40:33.527853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.159 Running I/O for 15 seconds... 00:35:12.027 8616.00 IOPS, 33.66 MiB/s [2024-11-17T23:40:36.419Z] 8688.50 IOPS, 33.94 MiB/s [2024-11-17T23:40:36.419Z] 00:40:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 400088 00:35:12.597 00:40:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:12.597 [2024-11-18 00:40:36.381505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.597 [2024-11-18 00:40:36.381558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:12.597 [2024-11-18 00:40:36.381590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.597 [2024-11-18 00:40:36.381632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:12.597 [2024-11-18 00:40:36.381648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.597 [2024-11-18 00:40:36.381670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:12.597 [2024-11-18 00:40:36.381695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:19 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.597 [2024-11-18 00:40:36.381709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-18 00:40:36.381724 .. 00:40:36.385167] nvme_qpair.c: repeated *NOTICE* record pairs condensed: nvme_io_qpair_print_command READ commands (sqid:1, nsid:1, len:8, lba 51152..52080 ascending in steps of 8, varying cid) plus one WRITE (cid:112 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000), each followed by spdk_nvme_print_completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:12.600 [2024-11-18 00:40:36.385179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1
lba:52088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.600 [2024-11-18 00:40:36.385191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:12.600 [2024-11-18 00:40:36.385204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:52096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.600 [2024-11-18 00:40:36.385215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:12.600 [2024-11-18 00:40:36.385228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.600 [2024-11-18 00:40:36.385240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:12.600 [2024-11-18 00:40:36.385253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.600 [2024-11-18 00:40:36.385264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:12.600 [2024-11-18 00:40:36.385277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:52120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.600 [2024-11-18 00:40:36.385304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:12.600 [2024-11-18 00:40:36.385334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2bf30 is same with the state(6) to be set 00:35:12.600 [2024-11-18 00:40:36.385354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:12.600 [2024-11-18 00:40:36.385366] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:12.600 [2024-11-18 00:40:36.385378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52128 len:8 PRP1 0x0 PRP2 0x0 00:35:12.600 [2024-11-18 00:40:36.385398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:12.600 [2024-11-18 00:40:36.388664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.600 [2024-11-18 00:40:36.388752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:12.600 [2024-11-18 00:40:36.389450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.600 [2024-11-18 00:40:36.389480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:12.600 [2024-11-18 00:40:36.389497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:12.600 [2024-11-18 00:40:36.389725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:12.600 [2024-11-18 00:40:36.389935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.600 [2024-11-18 00:40:36.389954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.600 [2024-11-18 00:40:36.389970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.600 [2024-11-18 00:40:36.389983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.600 [2024-11-18 00:40:36.402114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.600 [2024-11-18 00:40:36.402483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.600 [2024-11-18 00:40:36.402513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:12.600 [2024-11-18 00:40:36.402530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:12.600 [2024-11-18 00:40:36.402757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:12.600 [2024-11-18 00:40:36.402965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.600 [2024-11-18 00:40:36.402983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.600 [2024-11-18 00:40:36.402995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.600 [2024-11-18 00:40:36.403006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.600 [2024-11-18 00:40:36.415997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.600 [2024-11-18 00:40:36.416443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.600 [2024-11-18 00:40:36.416485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:12.600 [2024-11-18 00:40:36.416512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:12.861 [2024-11-18 00:40:36.416776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:12.861 [2024-11-18 00:40:36.416996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.861 [2024-11-18 00:40:36.417016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.861 [2024-11-18 00:40:36.417030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.861 [2024-11-18 00:40:36.417042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.861 [2024-11-18 00:40:36.429169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.861 [2024-11-18 00:40:36.429572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.861 [2024-11-18 00:40:36.429624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:12.861 [2024-11-18 00:40:36.429640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:12.861 [2024-11-18 00:40:36.429904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:12.861 [2024-11-18 00:40:36.430096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.861 [2024-11-18 00:40:36.430115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.861 [2024-11-18 00:40:36.430127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.861 [2024-11-18 00:40:36.430138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.861 [2024-11-18 00:40:36.442284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.861 [2024-11-18 00:40:36.442654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.861 [2024-11-18 00:40:36.442682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:12.861 [2024-11-18 00:40:36.442698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:12.861 [2024-11-18 00:40:36.442920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:12.861 [2024-11-18 00:40:36.443146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.861 [2024-11-18 00:40:36.443165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.861 [2024-11-18 00:40:36.443177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.861 [2024-11-18 00:40:36.443188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.861 [2024-11-18 00:40:36.455388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.861 [2024-11-18 00:40:36.455753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.861 [2024-11-18 00:40:36.455781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:12.861 [2024-11-18 00:40:36.455811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:12.861 [2024-11-18 00:40:36.456056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:12.861 [2024-11-18 00:40:36.456248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.861 [2024-11-18 00:40:36.456266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.861 [2024-11-18 00:40:36.456278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.861 [2024-11-18 00:40:36.456289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.861 [2024-11-18 00:40:36.468501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.861 [2024-11-18 00:40:36.468882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.861 [2024-11-18 00:40:36.468914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:12.861 [2024-11-18 00:40:36.468931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:12.861 [2024-11-18 00:40:36.469164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:12.861 [2024-11-18 00:40:36.469413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.861 [2024-11-18 00:40:36.469434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.861 [2024-11-18 00:40:36.469447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.861 [2024-11-18 00:40:36.469459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.861 [2024-11-18 00:40:36.481466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.861 [2024-11-18 00:40:36.481896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.861 [2024-11-18 00:40:36.481923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:12.861 [2024-11-18 00:40:36.481939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:12.861 [2024-11-18 00:40:36.482176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:12.861 [2024-11-18 00:40:36.482415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.861 [2024-11-18 00:40:36.482436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.861 [2024-11-18 00:40:36.482449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.861 [2024-11-18 00:40:36.482462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.861 [2024-11-18 00:40:36.494636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.861 [2024-11-18 00:40:36.495060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.861 [2024-11-18 00:40:36.495100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:12.861 [2024-11-18 00:40:36.495117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:12.861 [2024-11-18 00:40:36.495364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:12.861 [2024-11-18 00:40:36.495599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.861 [2024-11-18 00:40:36.495618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.861 [2024-11-18 00:40:36.495630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.861 [2024-11-18 00:40:36.495642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.861 [2024-11-18 00:40:36.507683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.861 [2024-11-18 00:40:36.508047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.861 [2024-11-18 00:40:36.508075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:12.861 [2024-11-18 00:40:36.508091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:12.861 [2024-11-18 00:40:36.508345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:12.861 [2024-11-18 00:40:36.508554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.861 [2024-11-18 00:40:36.508574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.861 [2024-11-18 00:40:36.508587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.861 [2024-11-18 00:40:36.508617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.861 [2024-11-18 00:40:36.520660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.861 [2024-11-18 00:40:36.521028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.861 [2024-11-18 00:40:36.521072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:12.861 [2024-11-18 00:40:36.521089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:12.861 [2024-11-18 00:40:36.521371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:12.861 [2024-11-18 00:40:36.521576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.861 [2024-11-18 00:40:36.521595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.861 [2024-11-18 00:40:36.521622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.861 [2024-11-18 00:40:36.521634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.861 [2024-11-18 00:40:36.533662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.861 [2024-11-18 00:40:36.534026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.861 [2024-11-18 00:40:36.534053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:12.862 [2024-11-18 00:40:36.534069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:12.862 [2024-11-18 00:40:36.534303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:12.862 [2024-11-18 00:40:36.534530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.862 [2024-11-18 00:40:36.534550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.862 [2024-11-18 00:40:36.534563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.862 [2024-11-18 00:40:36.534575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.862 [2024-11-18 00:40:36.546915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.862 [2024-11-18 00:40:36.547297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.862 [2024-11-18 00:40:36.547344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:12.862 [2024-11-18 00:40:36.547361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:12.862 [2024-11-18 00:40:36.547568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:12.862 [2024-11-18 00:40:36.547794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.862 [2024-11-18 00:40:36.547813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.862 [2024-11-18 00:40:36.547829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.862 [2024-11-18 00:40:36.547840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.862 [2024-11-18 00:40:36.559966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.862 [2024-11-18 00:40:36.560366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.862 [2024-11-18 00:40:36.560394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:12.862 [2024-11-18 00:40:36.560410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:12.862 [2024-11-18 00:40:36.560631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:12.862 [2024-11-18 00:40:36.560856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.862 [2024-11-18 00:40:36.560874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.862 [2024-11-18 00:40:36.560886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.862 [2024-11-18 00:40:36.560897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.862 [2024-11-18 00:40:36.573051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.862 [2024-11-18 00:40:36.573371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.862 [2024-11-18 00:40:36.573398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:12.862 [2024-11-18 00:40:36.573414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:12.862 [2024-11-18 00:40:36.573629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:12.862 [2024-11-18 00:40:36.573827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.862 [2024-11-18 00:40:36.573846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.862 [2024-11-18 00:40:36.573858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.862 [2024-11-18 00:40:36.573870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.862 [2024-11-18 00:40:36.586129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.862 [2024-11-18 00:40:36.586515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.862 [2024-11-18 00:40:36.586557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:12.862 [2024-11-18 00:40:36.586574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:12.862 [2024-11-18 00:40:36.586797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:12.862 [2024-11-18 00:40:36.587006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.862 [2024-11-18 00:40:36.587024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.862 [2024-11-18 00:40:36.587036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.862 [2024-11-18 00:40:36.587047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.862 [2024-11-18 00:40:36.599082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.862 [2024-11-18 00:40:36.599523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.862 [2024-11-18 00:40:36.599565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:12.862 [2024-11-18 00:40:36.599581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:12.862 [2024-11-18 00:40:36.599821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:12.862 [2024-11-18 00:40:36.600027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.862 [2024-11-18 00:40:36.600046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.862 [2024-11-18 00:40:36.600058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.862 [2024-11-18 00:40:36.600069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.862 [2024-11-18 00:40:36.612185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.862 [2024-11-18 00:40:36.612595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.862 [2024-11-18 00:40:36.612623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:12.862 [2024-11-18 00:40:36.612655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:12.862 [2024-11-18 00:40:36.612886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:12.862 [2024-11-18 00:40:36.613078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.862 [2024-11-18 00:40:36.613096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.862 [2024-11-18 00:40:36.613108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.862 [2024-11-18 00:40:36.613119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.862 [2024-11-18 00:40:36.625191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.862 [2024-11-18 00:40:36.625688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.862 [2024-11-18 00:40:36.625730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:12.862 [2024-11-18 00:40:36.625747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:12.862 [2024-11-18 00:40:36.625996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:12.862 [2024-11-18 00:40:36.626204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.862 [2024-11-18 00:40:36.626222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.862 [2024-11-18 00:40:36.626234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.862 [2024-11-18 00:40:36.626245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.862 [2024-11-18 00:40:36.638251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.862 [2024-11-18 00:40:36.638713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.862 [2024-11-18 00:40:36.638756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:12.862 [2024-11-18 00:40:36.638777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:12.862 [2024-11-18 00:40:36.639044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:12.862 [2024-11-18 00:40:36.639255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.862 [2024-11-18 00:40:36.639274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.862 [2024-11-18 00:40:36.639287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.862 [2024-11-18 00:40:36.639323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.862 [2024-11-18 00:40:36.651730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.862 [2024-11-18 00:40:36.652093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.862 [2024-11-18 00:40:36.652136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:12.862 [2024-11-18 00:40:36.652153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:12.862 [2024-11-18 00:40:36.652403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:12.862 [2024-11-18 00:40:36.652607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.862 [2024-11-18 00:40:36.652640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.862 [2024-11-18 00:40:36.652652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.862 [2024-11-18 00:40:36.652663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.862 [2024-11-18 00:40:36.664942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.862 [2024-11-18 00:40:36.665353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.863 [2024-11-18 00:40:36.665381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:12.863 [2024-11-18 00:40:36.665412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:12.863 [2024-11-18 00:40:36.665655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:12.863 [2024-11-18 00:40:36.665863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.863 [2024-11-18 00:40:36.665881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.863 [2024-11-18 00:40:36.665894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.863 [2024-11-18 00:40:36.665905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.863 [2024-11-18 00:40:36.678390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.863 [2024-11-18 00:40:36.678798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.863 [2024-11-18 00:40:36.678829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:12.863 [2024-11-18 00:40:36.678847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:12.863 [2024-11-18 00:40:36.679088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:12.863 [2024-11-18 00:40:36.679371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.863 [2024-11-18 00:40:36.679410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.863 [2024-11-18 00:40:36.679424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.863 [2024-11-18 00:40:36.679437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.122 [2024-11-18 00:40:36.691570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.122 [2024-11-18 00:40:36.691983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.122 [2024-11-18 00:40:36.692012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.122 [2024-11-18 00:40:36.692028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.122 [2024-11-18 00:40:36.692244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.122 [2024-11-18 00:40:36.692489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.122 [2024-11-18 00:40:36.692511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.122 [2024-11-18 00:40:36.692524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.122 [2024-11-18 00:40:36.692536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.122 [2024-11-18 00:40:36.704653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.122 [2024-11-18 00:40:36.705143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.122 [2024-11-18 00:40:36.705186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.122 [2024-11-18 00:40:36.705204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.122 [2024-11-18 00:40:36.705450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.122 [2024-11-18 00:40:36.705666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.122 [2024-11-18 00:40:36.705685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.122 [2024-11-18 00:40:36.705697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.122 [2024-11-18 00:40:36.705708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.122 [2024-11-18 00:40:36.717774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.122 [2024-11-18 00:40:36.718200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.122 [2024-11-18 00:40:36.718227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.122 [2024-11-18 00:40:36.718259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.122 [2024-11-18 00:40:36.718511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.122 [2024-11-18 00:40:36.718724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.122 [2024-11-18 00:40:36.718743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.122 [2024-11-18 00:40:36.718760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.122 [2024-11-18 00:40:36.718772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.122 [2024-11-18 00:40:36.731022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.122 [2024-11-18 00:40:36.731449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.122 [2024-11-18 00:40:36.731492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.122 [2024-11-18 00:40:36.731508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.122 [2024-11-18 00:40:36.731742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.122 [2024-11-18 00:40:36.731934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.122 [2024-11-18 00:40:36.731952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.122 [2024-11-18 00:40:36.731964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.122 [2024-11-18 00:40:36.731975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.122 [2024-11-18 00:40:36.744129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.122 [2024-11-18 00:40:36.744505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.122 [2024-11-18 00:40:36.744534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.122 [2024-11-18 00:40:36.744550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.122 [2024-11-18 00:40:36.744777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.122 [2024-11-18 00:40:36.744991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.122 [2024-11-18 00:40:36.745010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.122 [2024-11-18 00:40:36.745022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.122 [2024-11-18 00:40:36.745034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.122 [2024-11-18 00:40:36.757264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.122 7591.33 IOPS, 29.65 MiB/s [2024-11-17T23:40:36.944Z] [2024-11-18 00:40:36.759017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.122 [2024-11-18 00:40:36.759045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.122 [2024-11-18 00:40:36.759060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.122 [2024-11-18 00:40:36.759297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.122 [2024-11-18 00:40:36.759535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.122 [2024-11-18 00:40:36.759554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.122 [2024-11-18 00:40:36.759566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.122 [2024-11-18 00:40:36.759577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.122 [2024-11-18 00:40:36.770526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.122 [2024-11-18 00:40:36.770924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.122 [2024-11-18 00:40:36.770967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.122 [2024-11-18 00:40:36.770983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.122 [2024-11-18 00:40:36.771234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.122 [2024-11-18 00:40:36.771490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.122 [2024-11-18 00:40:36.771511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.122 [2024-11-18 00:40:36.771524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.122 [2024-11-18 00:40:36.771535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.122 [2024-11-18 00:40:36.783585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.122 [2024-11-18 00:40:36.783902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.122 [2024-11-18 00:40:36.783929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.122 [2024-11-18 00:40:36.783944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.122 [2024-11-18 00:40:36.784161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.122 [2024-11-18 00:40:36.784412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.122 [2024-11-18 00:40:36.784433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.122 [2024-11-18 00:40:36.784445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.123 [2024-11-18 00:40:36.784457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.123 [2024-11-18 00:40:36.796708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.123 [2024-11-18 00:40:36.797060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.123 [2024-11-18 00:40:36.797086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.123 [2024-11-18 00:40:36.797102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.123 [2024-11-18 00:40:36.797301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.123 [2024-11-18 00:40:36.797524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.123 [2024-11-18 00:40:36.797544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.123 [2024-11-18 00:40:36.797556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.123 [2024-11-18 00:40:36.797567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.123 [2024-11-18 00:40:36.809770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.123 [2024-11-18 00:40:36.810252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.123 [2024-11-18 00:40:36.810303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.123 [2024-11-18 00:40:36.810351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.123 [2024-11-18 00:40:36.810604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.123 [2024-11-18 00:40:36.810812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.123 [2024-11-18 00:40:36.810830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.123 [2024-11-18 00:40:36.810842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.123 [2024-11-18 00:40:36.810852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.123 [2024-11-18 00:40:36.822757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.123 [2024-11-18 00:40:36.823187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.123 [2024-11-18 00:40:36.823215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.123 [2024-11-18 00:40:36.823247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.123 [2024-11-18 00:40:36.823486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.123 [2024-11-18 00:40:36.823731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.123 [2024-11-18 00:40:36.823750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.123 [2024-11-18 00:40:36.823762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.123 [2024-11-18 00:40:36.823772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.123 [2024-11-18 00:40:36.835817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.123 [2024-11-18 00:40:36.836214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.123 [2024-11-18 00:40:36.836241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.123 [2024-11-18 00:40:36.836256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.123 [2024-11-18 00:40:36.836523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.123 [2024-11-18 00:40:36.836737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.123 [2024-11-18 00:40:36.836755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.123 [2024-11-18 00:40:36.836767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.123 [2024-11-18 00:40:36.836778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.123 [2024-11-18 00:40:36.848810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.123 [2024-11-18 00:40:36.849173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.123 [2024-11-18 00:40:36.849215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.123 [2024-11-18 00:40:36.849232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.123 [2024-11-18 00:40:36.849497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.123 [2024-11-18 00:40:36.849732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.123 [2024-11-18 00:40:36.849751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.123 [2024-11-18 00:40:36.849763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.123 [2024-11-18 00:40:36.849774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.123 [2024-11-18 00:40:36.861796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.123 [2024-11-18 00:40:36.862203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.123 [2024-11-18 00:40:36.862231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.123 [2024-11-18 00:40:36.862247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.123 [2024-11-18 00:40:36.862497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.123 [2024-11-18 00:40:36.862729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.123 [2024-11-18 00:40:36.862747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.123 [2024-11-18 00:40:36.862758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.123 [2024-11-18 00:40:36.862770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.123 [2024-11-18 00:40:36.874973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.123 [2024-11-18 00:40:36.875346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.123 [2024-11-18 00:40:36.875375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.123 [2024-11-18 00:40:36.875391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.123 [2024-11-18 00:40:36.875630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.123 [2024-11-18 00:40:36.875823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.123 [2024-11-18 00:40:36.875841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.123 [2024-11-18 00:40:36.875852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.123 [2024-11-18 00:40:36.875863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.123 [2024-11-18 00:40:36.888009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.123 [2024-11-18 00:40:36.888420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.123 [2024-11-18 00:40:36.888447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.123 [2024-11-18 00:40:36.888477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.123 [2024-11-18 00:40:36.888712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.123 [2024-11-18 00:40:36.888904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.123 [2024-11-18 00:40:36.888922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.123 [2024-11-18 00:40:36.888939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.123 [2024-11-18 00:40:36.888951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.123 [2024-11-18 00:40:36.901183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.123 [2024-11-18 00:40:36.901570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.123 [2024-11-18 00:40:36.901598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.123 [2024-11-18 00:40:36.901614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.123 [2024-11-18 00:40:36.901855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.123 [2024-11-18 00:40:36.902048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.123 [2024-11-18 00:40:36.902066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.123 [2024-11-18 00:40:36.902078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.123 [2024-11-18 00:40:36.902089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.123 [2024-11-18 00:40:36.914225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.123 [2024-11-18 00:40:36.914588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.123 [2024-11-18 00:40:36.914617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.123 [2024-11-18 00:40:36.914633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.123 [2024-11-18 00:40:36.914860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.123 [2024-11-18 00:40:36.915068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.123 [2024-11-18 00:40:36.915086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.124 [2024-11-18 00:40:36.915098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.124 [2024-11-18 00:40:36.915109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.124 [2024-11-18 00:40:36.927202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.124 [2024-11-18 00:40:36.927633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.124 [2024-11-18 00:40:36.927660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.124 [2024-11-18 00:40:36.927675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.124 [2024-11-18 00:40:36.927915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.124 [2024-11-18 00:40:36.928124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.124 [2024-11-18 00:40:36.928142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.124 [2024-11-18 00:40:36.928153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.124 [2024-11-18 00:40:36.928164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.124 [2024-11-18 00:40:36.940540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.124 [2024-11-18 00:40:36.940905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.124 [2024-11-18 00:40:36.940950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.124 [2024-11-18 00:40:36.940967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.124 [2024-11-18 00:40:36.941221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.124 [2024-11-18 00:40:36.941515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.124 [2024-11-18 00:40:36.941547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.124 [2024-11-18 00:40:36.941586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.124 [2024-11-18 00:40:36.941602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.383 [2024-11-18 00:40:36.953578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.383 [2024-11-18 00:40:36.953950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.383 [2024-11-18 00:40:36.953995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.383 [2024-11-18 00:40:36.954011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.383 [2024-11-18 00:40:36.954277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.383 [2024-11-18 00:40:36.954522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.383 [2024-11-18 00:40:36.954543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.383 [2024-11-18 00:40:36.954557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.383 [2024-11-18 00:40:36.954569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.383 [2024-11-18 00:40:36.966766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.383 [2024-11-18 00:40:36.967098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.383 [2024-11-18 00:40:36.967174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.383 [2024-11-18 00:40:36.967192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.383 [2024-11-18 00:40:36.967436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.383 [2024-11-18 00:40:36.967649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.384 [2024-11-18 00:40:36.967668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.384 [2024-11-18 00:40:36.967680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.384 [2024-11-18 00:40:36.967691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.384 [2024-11-18 00:40:36.979977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.384 [2024-11-18 00:40:36.980442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.384 [2024-11-18 00:40:36.980472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.384 [2024-11-18 00:40:36.980494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.384 [2024-11-18 00:40:36.980735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.384 [2024-11-18 00:40:36.980927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.384 [2024-11-18 00:40:36.980945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.384 [2024-11-18 00:40:36.980957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.384 [2024-11-18 00:40:36.980968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.384 [2024-11-18 00:40:36.993180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.384 [2024-11-18 00:40:36.993609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.384 [2024-11-18 00:40:36.993638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.384 [2024-11-18 00:40:36.993669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.384 [2024-11-18 00:40:36.993890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.384 [2024-11-18 00:40:36.994117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.384 [2024-11-18 00:40:36.994135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.384 [2024-11-18 00:40:36.994147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.384 [2024-11-18 00:40:36.994158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.384 [2024-11-18 00:40:37.006275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.384 [2024-11-18 00:40:37.006715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.384 [2024-11-18 00:40:37.006758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.384 [2024-11-18 00:40:37.006776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.384 [2024-11-18 00:40:37.007014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.384 [2024-11-18 00:40:37.007221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.384 [2024-11-18 00:40:37.007240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.384 [2024-11-18 00:40:37.007252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.384 [2024-11-18 00:40:37.007263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.384 [2024-11-18 00:40:37.019483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.384 [2024-11-18 00:40:37.019868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.384 [2024-11-18 00:40:37.019909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.384 [2024-11-18 00:40:37.019925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.384 [2024-11-18 00:40:37.020176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.384 [2024-11-18 00:40:37.020421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.384 [2024-11-18 00:40:37.020442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.384 [2024-11-18 00:40:37.020456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.384 [2024-11-18 00:40:37.020467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.384 [2024-11-18 00:40:37.032708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.384 [2024-11-18 00:40:37.033008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.384 [2024-11-18 00:40:37.033049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.384 [2024-11-18 00:40:37.033065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.384 [2024-11-18 00:40:37.033281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.384 [2024-11-18 00:40:37.033509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.384 [2024-11-18 00:40:37.033530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.384 [2024-11-18 00:40:37.033543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.384 [2024-11-18 00:40:37.033555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.384 [2024-11-18 00:40:37.046005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.384 [2024-11-18 00:40:37.046494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.384 [2024-11-18 00:40:37.046537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.384 [2024-11-18 00:40:37.046554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.384 [2024-11-18 00:40:37.046804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.384 [2024-11-18 00:40:37.047011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.384 [2024-11-18 00:40:37.047030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.384 [2024-11-18 00:40:37.047042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.384 [2024-11-18 00:40:37.047053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.384 [2024-11-18 00:40:37.059241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.384 [2024-11-18 00:40:37.059600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.384 [2024-11-18 00:40:37.059629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.384 [2024-11-18 00:40:37.059646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.384 [2024-11-18 00:40:37.059881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.384 [2024-11-18 00:40:37.060090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.384 [2024-11-18 00:40:37.060108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.384 [2024-11-18 00:40:37.060124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.384 [2024-11-18 00:40:37.060137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.384 [2024-11-18 00:40:37.072399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.384 [2024-11-18 00:40:37.072846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.384 [2024-11-18 00:40:37.072889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.384 [2024-11-18 00:40:37.072907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.384 [2024-11-18 00:40:37.073145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.384 [2024-11-18 00:40:37.073398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.384 [2024-11-18 00:40:37.073419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.384 [2024-11-18 00:40:37.073432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.384 [2024-11-18 00:40:37.073444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.384 [2024-11-18 00:40:37.085554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.384 [2024-11-18 00:40:37.085921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.384 [2024-11-18 00:40:37.085948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.384 [2024-11-18 00:40:37.085964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.384 [2024-11-18 00:40:37.086198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.384 [2024-11-18 00:40:37.086454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.384 [2024-11-18 00:40:37.086475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.384 [2024-11-18 00:40:37.086488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.384 [2024-11-18 00:40:37.086499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.384 [2024-11-18 00:40:37.098582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.384 [2024-11-18 00:40:37.098946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.384 [2024-11-18 00:40:37.098974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.384 [2024-11-18 00:40:37.098990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.385 [2024-11-18 00:40:37.099232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.385 [2024-11-18 00:40:37.099469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.385 [2024-11-18 00:40:37.099489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.385 [2024-11-18 00:40:37.099501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.385 [2024-11-18 00:40:37.099513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.385 [2024-11-18 00:40:37.111824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.385 [2024-11-18 00:40:37.112331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.385 [2024-11-18 00:40:37.112380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.385 [2024-11-18 00:40:37.112396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.385 [2024-11-18 00:40:37.112638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.385 [2024-11-18 00:40:37.112845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.385 [2024-11-18 00:40:37.112864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.385 [2024-11-18 00:40:37.112875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.385 [2024-11-18 00:40:37.112886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.385 [2024-11-18 00:40:37.124917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.385 [2024-11-18 00:40:37.125284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.385 [2024-11-18 00:40:37.125319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.385 [2024-11-18 00:40:37.125338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.385 [2024-11-18 00:40:37.125566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.385 [2024-11-18 00:40:37.125778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.385 [2024-11-18 00:40:37.125796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.385 [2024-11-18 00:40:37.125808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.385 [2024-11-18 00:40:37.125819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.385 [2024-11-18 00:40:37.138069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.385 [2024-11-18 00:40:37.138425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.385 [2024-11-18 00:40:37.138454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.385 [2024-11-18 00:40:37.138470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.385 [2024-11-18 00:40:37.138699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.385 [2024-11-18 00:40:37.138907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.385 [2024-11-18 00:40:37.138926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.385 [2024-11-18 00:40:37.138938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.385 [2024-11-18 00:40:37.138949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.385 [2024-11-18 00:40:37.151556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.385 [2024-11-18 00:40:37.151990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.385 [2024-11-18 00:40:37.152018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.385 [2024-11-18 00:40:37.152040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.385 [2024-11-18 00:40:37.152291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.385 [2024-11-18 00:40:37.152512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.385 [2024-11-18 00:40:37.152531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.385 [2024-11-18 00:40:37.152544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.385 [2024-11-18 00:40:37.152556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.385 [2024-11-18 00:40:37.164726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.385 [2024-11-18 00:40:37.165157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.385 [2024-11-18 00:40:37.165184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.385 [2024-11-18 00:40:37.165201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.385 [2024-11-18 00:40:37.165463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.385 [2024-11-18 00:40:37.165676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.385 [2024-11-18 00:40:37.165695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.385 [2024-11-18 00:40:37.165707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.385 [2024-11-18 00:40:37.165718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.385 [2024-11-18 00:40:37.177825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.385 [2024-11-18 00:40:37.178212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.385 [2024-11-18 00:40:37.178239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.385 [2024-11-18 00:40:37.178255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.385 [2024-11-18 00:40:37.178504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.385 [2024-11-18 00:40:37.178731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.385 [2024-11-18 00:40:37.178750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.385 [2024-11-18 00:40:37.178762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.385 [2024-11-18 00:40:37.178773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.385 [2024-11-18 00:40:37.190999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.385 [2024-11-18 00:40:37.191337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.385 [2024-11-18 00:40:37.191371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.385 [2024-11-18 00:40:37.191402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.385 [2024-11-18 00:40:37.191645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.385 [2024-11-18 00:40:37.191854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.385 [2024-11-18 00:40:37.191878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.385 [2024-11-18 00:40:37.191890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.385 [2024-11-18 00:40:37.191901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.385 [2024-11-18 00:40:37.204574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.644 [2024-11-18 00:40:37.205157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.644 [2024-11-18 00:40:37.205189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.644 [2024-11-18 00:40:37.205207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.644 [2024-11-18 00:40:37.205433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.644 [2024-11-18 00:40:37.205662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.644 [2024-11-18 00:40:37.205681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.644 [2024-11-18 00:40:37.205694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.644 [2024-11-18 00:40:37.205705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.644 [2024-11-18 00:40:37.217771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.644 [2024-11-18 00:40:37.218171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.644 [2024-11-18 00:40:37.218201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.644 [2024-11-18 00:40:37.218218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.644 [2024-11-18 00:40:37.218472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.644 [2024-11-18 00:40:37.218719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.644 [2024-11-18 00:40:37.218737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.645 [2024-11-18 00:40:37.218749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.645 [2024-11-18 00:40:37.218761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.645 [2024-11-18 00:40:37.230945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.645 [2024-11-18 00:40:37.231355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.645 [2024-11-18 00:40:37.231399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.645 [2024-11-18 00:40:37.231415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.645 [2024-11-18 00:40:37.231679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.645 [2024-11-18 00:40:37.231871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.645 [2024-11-18 00:40:37.231889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.645 [2024-11-18 00:40:37.231901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.645 [2024-11-18 00:40:37.231917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.645 [2024-11-18 00:40:37.244091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.645 [2024-11-18 00:40:37.244431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.645 [2024-11-18 00:40:37.244459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.645 [2024-11-18 00:40:37.244475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.645 [2024-11-18 00:40:37.244695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.645 [2024-11-18 00:40:37.244903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.645 [2024-11-18 00:40:37.244921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.645 [2024-11-18 00:40:37.244933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.645 [2024-11-18 00:40:37.244944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.645 [2024-11-18 00:40:37.257254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.645 [2024-11-18 00:40:37.257616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.645 [2024-11-18 00:40:37.257645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.645 [2024-11-18 00:40:37.257677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.645 [2024-11-18 00:40:37.257898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.645 [2024-11-18 00:40:37.258106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.645 [2024-11-18 00:40:37.258124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.645 [2024-11-18 00:40:37.258137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.645 [2024-11-18 00:40:37.258148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.645 [2024-11-18 00:40:37.270430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.645 [2024-11-18 00:40:37.270889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.645 [2024-11-18 00:40:37.270916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.645 [2024-11-18 00:40:37.270932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.645 [2024-11-18 00:40:37.271172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.645 [2024-11-18 00:40:37.271414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.645 [2024-11-18 00:40:37.271435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.645 [2024-11-18 00:40:37.271449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.645 [2024-11-18 00:40:37.271461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.645 [2024-11-18 00:40:37.283580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.645 [2024-11-18 00:40:37.284015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.645 [2024-11-18 00:40:37.284042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.645 [2024-11-18 00:40:37.284074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.645 [2024-11-18 00:40:37.284325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.645 [2024-11-18 00:40:37.284545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.645 [2024-11-18 00:40:37.284565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.645 [2024-11-18 00:40:37.284577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.645 [2024-11-18 00:40:37.284589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.645 [2024-11-18 00:40:37.296699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.645 [2024-11-18 00:40:37.297065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.645 [2024-11-18 00:40:37.297107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.645 [2024-11-18 00:40:37.297123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.645 [2024-11-18 00:40:37.297395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.645 [2024-11-18 00:40:37.297600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.645 [2024-11-18 00:40:37.297619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.645 [2024-11-18 00:40:37.297632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.645 [2024-11-18 00:40:37.297643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.645 [2024-11-18 00:40:37.309723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.645 [2024-11-18 00:40:37.310085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.645 [2024-11-18 00:40:37.310129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.645 [2024-11-18 00:40:37.310145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.645 [2024-11-18 00:40:37.310409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.645 [2024-11-18 00:40:37.310629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.645 [2024-11-18 00:40:37.310648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.645 [2024-11-18 00:40:37.310674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.645 [2024-11-18 00:40:37.310685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.645 [2024-11-18 00:40:37.322830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.645 [2024-11-18 00:40:37.323500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.645 [2024-11-18 00:40:37.323539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.645 [2024-11-18 00:40:37.323572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.645 [2024-11-18 00:40:37.323814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.645 [2024-11-18 00:40:37.324008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.645 [2024-11-18 00:40:37.324027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.645 [2024-11-18 00:40:37.324039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.645 [2024-11-18 00:40:37.324050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.645 [2024-11-18 00:40:37.335803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.645 [2024-11-18 00:40:37.336179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.645 [2024-11-18 00:40:37.336207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.645 [2024-11-18 00:40:37.336224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.645 [2024-11-18 00:40:37.336491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.645 [2024-11-18 00:40:37.336722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.645 [2024-11-18 00:40:37.336741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.645 [2024-11-18 00:40:37.336753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.645 [2024-11-18 00:40:37.336764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.645 [2024-11-18 00:40:37.349019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.645 [2024-11-18 00:40:37.349354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.645 [2024-11-18 00:40:37.349384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.645 [2024-11-18 00:40:37.349401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.646 [2024-11-18 00:40:37.349629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.646 [2024-11-18 00:40:37.349854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.646 [2024-11-18 00:40:37.349873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.646 [2024-11-18 00:40:37.349886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.646 [2024-11-18 00:40:37.349897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.646 [2024-11-18 00:40:37.362055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.646 [2024-11-18 00:40:37.362452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.646 [2024-11-18 00:40:37.362480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.646 [2024-11-18 00:40:37.362497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.646 [2024-11-18 00:40:37.362718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.646 [2024-11-18 00:40:37.362926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.646 [2024-11-18 00:40:37.362949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.646 [2024-11-18 00:40:37.362961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.646 [2024-11-18 00:40:37.362972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.646 [2024-11-18 00:40:37.375170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.646 [2024-11-18 00:40:37.375606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.646 [2024-11-18 00:40:37.375634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.646 [2024-11-18 00:40:37.375650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.646 [2024-11-18 00:40:37.375884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.646 [2024-11-18 00:40:37.376094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.646 [2024-11-18 00:40:37.376112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.646 [2024-11-18 00:40:37.376124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.646 [2024-11-18 00:40:37.376135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.646 [2024-11-18 00:40:37.388284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.646 [2024-11-18 00:40:37.388716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.646 [2024-11-18 00:40:37.388745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.646 [2024-11-18 00:40:37.388761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.646 [2024-11-18 00:40:37.389001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.646 [2024-11-18 00:40:37.389209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.646 [2024-11-18 00:40:37.389229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.646 [2024-11-18 00:40:37.389241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.646 [2024-11-18 00:40:37.389252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.646 [2024-11-18 00:40:37.402034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.646 [2024-11-18 00:40:37.402389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.646 [2024-11-18 00:40:37.402418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.646 [2024-11-18 00:40:37.402434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.646 [2024-11-18 00:40:37.402663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.646 [2024-11-18 00:40:37.402877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.646 [2024-11-18 00:40:37.402896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.646 [2024-11-18 00:40:37.402908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.646 [2024-11-18 00:40:37.402924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.646 [2024-11-18 00:40:37.415321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.646 [2024-11-18 00:40:37.415746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.646 [2024-11-18 00:40:37.415817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.646 [2024-11-18 00:40:37.415834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.646 [2024-11-18 00:40:37.416082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.646 [2024-11-18 00:40:37.416319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.646 [2024-11-18 00:40:37.416340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.646 [2024-11-18 00:40:37.416353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.646 [2024-11-18 00:40:37.416382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.646 [2024-11-18 00:40:37.428590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.646 [2024-11-18 00:40:37.429068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.646 [2024-11-18 00:40:37.429118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.646 [2024-11-18 00:40:37.429134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.646 [2024-11-18 00:40:37.429379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.646 [2024-11-18 00:40:37.429619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.646 [2024-11-18 00:40:37.429639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.646 [2024-11-18 00:40:37.429666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.646 [2024-11-18 00:40:37.429678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.646 [2024-11-18 00:40:37.441762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.646 [2024-11-18 00:40:37.442131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.646 [2024-11-18 00:40:37.442174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.646 [2024-11-18 00:40:37.442190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.646 [2024-11-18 00:40:37.442443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.646 [2024-11-18 00:40:37.442676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.646 [2024-11-18 00:40:37.442695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.646 [2024-11-18 00:40:37.442707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.646 [2024-11-18 00:40:37.442718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.646 [2024-11-18 00:40:37.454878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.646 [2024-11-18 00:40:37.455215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.646 [2024-11-18 00:40:37.455243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.646 [2024-11-18 00:40:37.455259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.646 [2024-11-18 00:40:37.455510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.646 [2024-11-18 00:40:37.455738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.646 [2024-11-18 00:40:37.455757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.646 [2024-11-18 00:40:37.455768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.646 [2024-11-18 00:40:37.455780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.906 [2024-11-18 00:40:37.468277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.906 [2024-11-18 00:40:37.468627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.906 [2024-11-18 00:40:37.468675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.906 [2024-11-18 00:40:37.468692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.906 [2024-11-18 00:40:37.468926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.906 [2024-11-18 00:40:37.469136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.906 [2024-11-18 00:40:37.469155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.906 [2024-11-18 00:40:37.469167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.906 [2024-11-18 00:40:37.469178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.906 [2024-11-18 00:40:37.481489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.906 [2024-11-18 00:40:37.481961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.906 [2024-11-18 00:40:37.482006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.906 [2024-11-18 00:40:37.482023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.906 [2024-11-18 00:40:37.482262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.906 [2024-11-18 00:40:37.482491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.906 [2024-11-18 00:40:37.482512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.906 [2024-11-18 00:40:37.482525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.906 [2024-11-18 00:40:37.482537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.906 [2024-11-18 00:40:37.494645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.906 [2024-11-18 00:40:37.495097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.906 [2024-11-18 00:40:37.495152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.906 [2024-11-18 00:40:37.495187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.906 [2024-11-18 00:40:37.495440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.906 [2024-11-18 00:40:37.495658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.906 [2024-11-18 00:40:37.495676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.906 [2024-11-18 00:40:37.495688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.906 [2024-11-18 00:40:37.495699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.906 [2024-11-18 00:40:37.507874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.906 [2024-11-18 00:40:37.508268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.906 [2024-11-18 00:40:37.508297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.906 [2024-11-18 00:40:37.508336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.906 [2024-11-18 00:40:37.508582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.906 [2024-11-18 00:40:37.508791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.906 [2024-11-18 00:40:37.508810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.906 [2024-11-18 00:40:37.508822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.906 [2024-11-18 00:40:37.508833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.906 [2024-11-18 00:40:37.521090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.906 [2024-11-18 00:40:37.521564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.906 [2024-11-18 00:40:37.521621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.906 [2024-11-18 00:40:37.521657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.906 [2024-11-18 00:40:37.521907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.906 [2024-11-18 00:40:37.522099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.906 [2024-11-18 00:40:37.522117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.906 [2024-11-18 00:40:37.522129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.906 [2024-11-18 00:40:37.522141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.906 [2024-11-18 00:40:37.534264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.906 [2024-11-18 00:40:37.534760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.906 [2024-11-18 00:40:37.534808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.906 [2024-11-18 00:40:37.534825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.906 [2024-11-18 00:40:37.535088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.906 [2024-11-18 00:40:37.535280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.906 [2024-11-18 00:40:37.535330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.906 [2024-11-18 00:40:37.535346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.906 [2024-11-18 00:40:37.535358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.906 [2024-11-18 00:40:37.547626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.906 [2024-11-18 00:40:37.548033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.906 [2024-11-18 00:40:37.548061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:13.906 [2024-11-18 00:40:37.548077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:13.906 [2024-11-18 00:40:37.548297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:13.906 [2024-11-18 00:40:37.548531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.906 [2024-11-18 00:40:37.548551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.906 [2024-11-18 00:40:37.548564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.906 [2024-11-18 00:40:37.548576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.906 [2024-11-18 00:40:37.560873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.906 [2024-11-18 00:40:37.561286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.906 [2024-11-18 00:40:37.561345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.906 [2024-11-18 00:40:37.561361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.906 [2024-11-18 00:40:37.561627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.906 [2024-11-18 00:40:37.561834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.906 [2024-11-18 00:40:37.561852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.906 [2024-11-18 00:40:37.561864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.906 [2024-11-18 00:40:37.561875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.906 [2024-11-18 00:40:37.574052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.906 [2024-11-18 00:40:37.574440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.906 [2024-11-18 00:40:37.574470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.906 [2024-11-18 00:40:37.574486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.906 [2024-11-18 00:40:37.574726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.906 [2024-11-18 00:40:37.574935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.906 [2024-11-18 00:40:37.574953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.906 [2024-11-18 00:40:37.574965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.906 [2024-11-18 00:40:37.574983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.906 [2024-11-18 00:40:37.587168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.906 [2024-11-18 00:40:37.587559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.906 [2024-11-18 00:40:37.587586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.906 [2024-11-18 00:40:37.587602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.907 [2024-11-18 00:40:37.587836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.907 [2024-11-18 00:40:37.588045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.907 [2024-11-18 00:40:37.588063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.907 [2024-11-18 00:40:37.588075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.907 [2024-11-18 00:40:37.588087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.907 [2024-11-18 00:40:37.600127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.907 [2024-11-18 00:40:37.600433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.907 [2024-11-18 00:40:37.600475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.907 [2024-11-18 00:40:37.600490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.907 [2024-11-18 00:40:37.600704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.907 [2024-11-18 00:40:37.600913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.907 [2024-11-18 00:40:37.600931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.907 [2024-11-18 00:40:37.600943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.907 [2024-11-18 00:40:37.600954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.907 [2024-11-18 00:40:37.613193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.907 [2024-11-18 00:40:37.613641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.907 [2024-11-18 00:40:37.613690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.907 [2024-11-18 00:40:37.613706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.907 [2024-11-18 00:40:37.613971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.907 [2024-11-18 00:40:37.614163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.907 [2024-11-18 00:40:37.614181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.907 [2024-11-18 00:40:37.614193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.907 [2024-11-18 00:40:37.614204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.907 [2024-11-18 00:40:37.626324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.907 [2024-11-18 00:40:37.626798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.907 [2024-11-18 00:40:37.626854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.907 [2024-11-18 00:40:37.626870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.907 [2024-11-18 00:40:37.627132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.907 [2024-11-18 00:40:37.627350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.907 [2024-11-18 00:40:37.627370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.907 [2024-11-18 00:40:37.627383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.907 [2024-11-18 00:40:37.627394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.907 [2024-11-18 00:40:37.639369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.907 [2024-11-18 00:40:37.639795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.907 [2024-11-18 00:40:37.639822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.907 [2024-11-18 00:40:37.639837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.907 [2024-11-18 00:40:37.640072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.907 [2024-11-18 00:40:37.640279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.907 [2024-11-18 00:40:37.640298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.907 [2024-11-18 00:40:37.640335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.907 [2024-11-18 00:40:37.640350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.907 [2024-11-18 00:40:37.652811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.907 [2024-11-18 00:40:37.653173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.907 [2024-11-18 00:40:37.653240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.907 [2024-11-18 00:40:37.653256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.907 [2024-11-18 00:40:37.653520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.907 [2024-11-18 00:40:37.653732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.907 [2024-11-18 00:40:37.653750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.907 [2024-11-18 00:40:37.653762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.907 [2024-11-18 00:40:37.653773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.907 [2024-11-18 00:40:37.665825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.907 [2024-11-18 00:40:37.666190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.907 [2024-11-18 00:40:37.666217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.907 [2024-11-18 00:40:37.666233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.907 [2024-11-18 00:40:37.666503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.907 [2024-11-18 00:40:37.666731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.907 [2024-11-18 00:40:37.666750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.907 [2024-11-18 00:40:37.666762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.907 [2024-11-18 00:40:37.666773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.907 [2024-11-18 00:40:37.679062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.907 [2024-11-18 00:40:37.679395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.907 [2024-11-18 00:40:37.679425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.907 [2024-11-18 00:40:37.679441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.907 [2024-11-18 00:40:37.679669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.907 [2024-11-18 00:40:37.679878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.907 [2024-11-18 00:40:37.679896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.907 [2024-11-18 00:40:37.679907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.907 [2024-11-18 00:40:37.679918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.907 [2024-11-18 00:40:37.692473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.907 [2024-11-18 00:40:37.692881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.907 [2024-11-18 00:40:37.692926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.907 [2024-11-18 00:40:37.692942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.907 [2024-11-18 00:40:37.693182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.907 [2024-11-18 00:40:37.693391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.907 [2024-11-18 00:40:37.693411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.907 [2024-11-18 00:40:37.693423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.907 [2024-11-18 00:40:37.693434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.907 [2024-11-18 00:40:37.705708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.907 [2024-11-18 00:40:37.706078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.907 [2024-11-18 00:40:37.706105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.907 [2024-11-18 00:40:37.706121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.907 [2024-11-18 00:40:37.706372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.907 [2024-11-18 00:40:37.706584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.907 [2024-11-18 00:40:37.706609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.907 [2024-11-18 00:40:37.706623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.907 [2024-11-18 00:40:37.706636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.907 [2024-11-18 00:40:37.718993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.907 [2024-11-18 00:40:37.719409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.907 [2024-11-18 00:40:37.719438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:13.908 [2024-11-18 00:40:37.719455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:13.908 [2024-11-18 00:40:37.719683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:13.908 [2024-11-18 00:40:37.719896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.908 [2024-11-18 00:40:37.719915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.908 [2024-11-18 00:40:37.719927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.908 [2024-11-18 00:40:37.719939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.168 [2024-11-18 00:40:37.732487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.168 [2024-11-18 00:40:37.732887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.168 [2024-11-18 00:40:37.732920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.168 [2024-11-18 00:40:37.732937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.168 [2024-11-18 00:40:37.733152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.168 [2024-11-18 00:40:37.733405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.168 [2024-11-18 00:40:37.733427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.168 [2024-11-18 00:40:37.733441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.168 [2024-11-18 00:40:37.733454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.168 [2024-11-18 00:40:37.745816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.168 [2024-11-18 00:40:37.746218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.168 [2024-11-18 00:40:37.746261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.168 [2024-11-18 00:40:37.746278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.168 [2024-11-18 00:40:37.746515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.168 [2024-11-18 00:40:37.746750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.168 [2024-11-18 00:40:37.746769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.168 [2024-11-18 00:40:37.746782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.168 [2024-11-18 00:40:37.746793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.168 [2024-11-18 00:40:37.759096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.168 [2024-11-18 00:40:37.759489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.168 [2024-11-18 00:40:37.759518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.168 [2024-11-18 00:40:37.759534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.168 [2024-11-18 00:40:37.759761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.168 [2024-11-18 00:40:37.759974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.168 [2024-11-18 00:40:37.759993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.168 [2024-11-18 00:40:37.760006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.168 [2024-11-18 00:40:37.760017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.168 5693.50 IOPS, 22.24 MiB/s [2024-11-17T23:40:37.990Z]
00:35:14.168 [2024-11-18 00:40:37.772414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.168 [2024-11-18 00:40:37.772812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.168 [2024-11-18 00:40:37.772840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.168 [2024-11-18 00:40:37.772857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.168 [2024-11-18 00:40:37.773086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.168 [2024-11-18 00:40:37.773340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.168 [2024-11-18 00:40:37.773360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.168 [2024-11-18 00:40:37.773373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.168 [2024-11-18 00:40:37.773399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.168 [2024-11-18 00:40:37.785660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.168 [2024-11-18 00:40:37.786082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.168 [2024-11-18 00:40:37.786110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.168 [2024-11-18 00:40:37.786126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.168 [2024-11-18 00:40:37.786367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.168 [2024-11-18 00:40:37.786577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.168 [2024-11-18 00:40:37.786597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.168 [2024-11-18 00:40:37.786610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.168 [2024-11-18 00:40:37.786637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.168 [2024-11-18 00:40:37.798929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.168 [2024-11-18 00:40:37.799340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.168 [2024-11-18 00:40:37.799374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.168 [2024-11-18 00:40:37.799391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.168 [2024-11-18 00:40:37.799620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.168 [2024-11-18 00:40:37.799834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.168 [2024-11-18 00:40:37.799853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.168 [2024-11-18 00:40:37.799865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.168 [2024-11-18 00:40:37.799877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.168 [2024-11-18 00:40:37.812219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.168 [2024-11-18 00:40:37.812680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.168 [2024-11-18 00:40:37.812723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.168 [2024-11-18 00:40:37.812740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.168 [2024-11-18 00:40:37.812983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.168 [2024-11-18 00:40:37.813201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.168 [2024-11-18 00:40:37.813220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.168 [2024-11-18 00:40:37.813233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.168 [2024-11-18 00:40:37.813244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.168 [2024-11-18 00:40:37.825551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.168 [2024-11-18 00:40:37.825884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.168 [2024-11-18 00:40:37.825912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.168 [2024-11-18 00:40:37.825928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.168 [2024-11-18 00:40:37.826154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.168 [2024-11-18 00:40:37.826412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.168 [2024-11-18 00:40:37.826433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.168 [2024-11-18 00:40:37.826447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.168 [2024-11-18 00:40:37.826459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.168 [2024-11-18 00:40:37.838742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.168 [2024-11-18 00:40:37.839131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.168 [2024-11-18 00:40:37.839160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.168 [2024-11-18 00:40:37.839177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.168 [2024-11-18 00:40:37.839422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.168 [2024-11-18 00:40:37.839652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.168 [2024-11-18 00:40:37.839672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.168 [2024-11-18 00:40:37.839685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.169 [2024-11-18 00:40:37.839712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.169 [2024-11-18 00:40:37.852066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.169 [2024-11-18 00:40:37.852425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.169 [2024-11-18 00:40:37.852454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.169 [2024-11-18 00:40:37.852470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.169 [2024-11-18 00:40:37.852709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.169 [2024-11-18 00:40:37.852907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.169 [2024-11-18 00:40:37.852925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.169 [2024-11-18 00:40:37.852938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.169 [2024-11-18 00:40:37.852949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.169 [2024-11-18 00:40:37.865304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.169 [2024-11-18 00:40:37.865679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.169 [2024-11-18 00:40:37.865708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.169 [2024-11-18 00:40:37.865724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.169 [2024-11-18 00:40:37.865951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.169 [2024-11-18 00:40:37.866165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.169 [2024-11-18 00:40:37.866184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.169 [2024-11-18 00:40:37.866196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.169 [2024-11-18 00:40:37.866207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.169 [2024-11-18 00:40:37.878677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.169 [2024-11-18 00:40:37.879052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.169 [2024-11-18 00:40:37.879080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.169 [2024-11-18 00:40:37.879096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.169 [2024-11-18 00:40:37.879349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.169 [2024-11-18 00:40:37.879575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.169 [2024-11-18 00:40:37.879596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.169 [2024-11-18 00:40:37.879614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.169 [2024-11-18 00:40:37.879626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.169 [2024-11-18 00:40:37.891962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.169 [2024-11-18 00:40:37.892331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.169 [2024-11-18 00:40:37.892359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.169 [2024-11-18 00:40:37.892376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.169 [2024-11-18 00:40:37.892605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.169 [2024-11-18 00:40:37.892822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.169 [2024-11-18 00:40:37.892841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.169 [2024-11-18 00:40:37.892854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.169 [2024-11-18 00:40:37.892866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.169 [2024-11-18 00:40:37.905430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.169 [2024-11-18 00:40:37.905747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.169 [2024-11-18 00:40:37.905789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.169 [2024-11-18 00:40:37.905805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.169 [2024-11-18 00:40:37.906027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.169 [2024-11-18 00:40:37.906240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.169 [2024-11-18 00:40:37.906259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.169 [2024-11-18 00:40:37.906271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.169 [2024-11-18 00:40:37.906283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.169 [2024-11-18 00:40:37.918706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.169 [2024-11-18 00:40:37.919081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.169 [2024-11-18 00:40:37.919124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.169 [2024-11-18 00:40:37.919140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.169 [2024-11-18 00:40:37.919402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.169 [2024-11-18 00:40:37.919621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.169 [2024-11-18 00:40:37.919640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.169 [2024-11-18 00:40:37.919652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.169 [2024-11-18 00:40:37.919664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.169 [2024-11-18 00:40:37.931952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.169 [2024-11-18 00:40:37.932344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.169 [2024-11-18 00:40:37.932373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.169 [2024-11-18 00:40:37.932389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.169 [2024-11-18 00:40:37.932618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.169 [2024-11-18 00:40:37.932848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.169 [2024-11-18 00:40:37.932868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.169 [2024-11-18 00:40:37.932880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.169 [2024-11-18 00:40:37.932891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.169 [2024-11-18 00:40:37.945357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.169 [2024-11-18 00:40:37.945752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.169 [2024-11-18 00:40:37.945794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.169 [2024-11-18 00:40:37.945809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.169 [2024-11-18 00:40:37.946044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.169 [2024-11-18 00:40:37.946256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.169 [2024-11-18 00:40:37.946274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.169 [2024-11-18 00:40:37.946287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.169 [2024-11-18 00:40:37.946298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.169 [2024-11-18 00:40:37.958673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.169 [2024-11-18 00:40:37.959046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.169 [2024-11-18 00:40:37.959089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.169 [2024-11-18 00:40:37.959105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.169 [2024-11-18 00:40:37.959390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.169 [2024-11-18 00:40:37.959600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.169 [2024-11-18 00:40:37.959635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.169 [2024-11-18 00:40:37.959648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.169 [2024-11-18 00:40:37.959660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.169 [2024-11-18 00:40:37.971852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.169 [2024-11-18 00:40:37.972164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.169 [2024-11-18 00:40:37.972210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.169 [2024-11-18 00:40:37.972227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.169 [2024-11-18 00:40:37.972466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.169 [2024-11-18 00:40:37.972704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.169 [2024-11-18 00:40:37.972723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.170 [2024-11-18 00:40:37.972735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.170 [2024-11-18 00:40:37.972747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.170 [2024-11-18 00:40:37.985262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.170 [2024-11-18 00:40:37.985708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.170 [2024-11-18 00:40:37.985739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.170 [2024-11-18 00:40:37.985756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.170 [2024-11-18 00:40:37.985984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.170 [2024-11-18 00:40:37.986200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.170 [2024-11-18 00:40:37.986220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.170 [2024-11-18 00:40:37.986232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.170 [2024-11-18 00:40:37.986243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.435 [2024-11-18 00:40:37.998647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.435 [2024-11-18 00:40:37.999091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.435 [2024-11-18 00:40:37.999121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.435 [2024-11-18 00:40:37.999138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.435 [2024-11-18 00:40:37.999377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.435 [2024-11-18 00:40:37.999602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.435 [2024-11-18 00:40:37.999622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.435 [2024-11-18 00:40:37.999636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.435 [2024-11-18 00:40:37.999648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.435 [2024-11-18 00:40:38.011782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.436 [2024-11-18 00:40:38.012223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.436 [2024-11-18 00:40:38.012252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.436 [2024-11-18 00:40:38.012269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.436 [2024-11-18 00:40:38.012507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.436 [2024-11-18 00:40:38.012753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.436 [2024-11-18 00:40:38.012772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.436 [2024-11-18 00:40:38.012785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.436 [2024-11-18 00:40:38.012797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.436 [2024-11-18 00:40:38.024960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.436 [2024-11-18 00:40:38.025396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.436 [2024-11-18 00:40:38.025425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.436 [2024-11-18 00:40:38.025442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.436 [2024-11-18 00:40:38.025670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.436 [2024-11-18 00:40:38.025885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.436 [2024-11-18 00:40:38.025904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.436 [2024-11-18 00:40:38.025916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.436 [2024-11-18 00:40:38.025928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.436 [2024-11-18 00:40:38.038240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.436 [2024-11-18 00:40:38.038627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.436 [2024-11-18 00:40:38.038656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.436 [2024-11-18 00:40:38.038673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.436 [2024-11-18 00:40:38.038905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.436 [2024-11-18 00:40:38.039119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.436 [2024-11-18 00:40:38.039137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.436 [2024-11-18 00:40:38.039150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.436 [2024-11-18 00:40:38.039161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.436 [2024-11-18 00:40:38.051477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.436 [2024-11-18 00:40:38.051924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.436 [2024-11-18 00:40:38.051966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.437 [2024-11-18 00:40:38.051983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.437 [2024-11-18 00:40:38.052228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.437 [2024-11-18 00:40:38.052475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.437 [2024-11-18 00:40:38.052497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.437 [2024-11-18 00:40:38.052515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.437 [2024-11-18 00:40:38.052528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.437 [2024-11-18 00:40:38.064684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.437 [2024-11-18 00:40:38.065123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.437 [2024-11-18 00:40:38.065151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.437 [2024-11-18 00:40:38.065168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.437 [2024-11-18 00:40:38.065407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.437 [2024-11-18 00:40:38.065632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.437 [2024-11-18 00:40:38.065651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.437 [2024-11-18 00:40:38.065664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.437 [2024-11-18 00:40:38.065675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.437 [2024-11-18 00:40:38.077950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.437 [2024-11-18 00:40:38.078327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.437 [2024-11-18 00:40:38.078356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.437 [2024-11-18 00:40:38.078372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.437 [2024-11-18 00:40:38.078600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.437 [2024-11-18 00:40:38.078815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.437 [2024-11-18 00:40:38.078834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.437 [2024-11-18 00:40:38.078847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.437 [2024-11-18 00:40:38.078858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.437 [2024-11-18 00:40:38.091228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.437 [2024-11-18 00:40:38.091615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.437 [2024-11-18 00:40:38.091644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.437 [2024-11-18 00:40:38.091660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.437 [2024-11-18 00:40:38.091901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.437 [2024-11-18 00:40:38.092115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.437 [2024-11-18 00:40:38.092134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.437 [2024-11-18 00:40:38.092146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.437 [2024-11-18 00:40:38.092158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.437 [2024-11-18 00:40:38.104528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.437 [2024-11-18 00:40:38.104963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.437 [2024-11-18 00:40:38.105006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.437 [2024-11-18 00:40:38.105022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.437 [2024-11-18 00:40:38.105263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.437 [2024-11-18 00:40:38.105509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.437 [2024-11-18 00:40:38.105531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.437 [2024-11-18 00:40:38.105544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.437 [2024-11-18 00:40:38.105556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.437 [2024-11-18 00:40:38.117771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.437 [2024-11-18 00:40:38.118147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.437 [2024-11-18 00:40:38.118191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.437 [2024-11-18 00:40:38.118206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.437 [2024-11-18 00:40:38.118457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.437 [2024-11-18 00:40:38.118696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.437 [2024-11-18 00:40:38.118715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.437 [2024-11-18 00:40:38.118727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.438 [2024-11-18 00:40:38.118739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.438 [2024-11-18 00:40:38.131196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.438 [2024-11-18 00:40:38.131610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.438 [2024-11-18 00:40:38.131639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.438 [2024-11-18 00:40:38.131655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.438 [2024-11-18 00:40:38.131896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.438 [2024-11-18 00:40:38.132121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.438 [2024-11-18 00:40:38.132141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.438 [2024-11-18 00:40:38.132155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.438 [2024-11-18 00:40:38.132182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.438 [2024-11-18 00:40:38.144522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.438 [2024-11-18 00:40:38.144884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.438 [2024-11-18 00:40:38.144911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.438 [2024-11-18 00:40:38.144932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.438 [2024-11-18 00:40:38.145140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.438 [2024-11-18 00:40:38.145396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.438 [2024-11-18 00:40:38.145417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.438 [2024-11-18 00:40:38.145430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.438 [2024-11-18 00:40:38.145457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.438 [2024-11-18 00:40:38.157916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.438 [2024-11-18 00:40:38.158353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.438 [2024-11-18 00:40:38.158382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.438 [2024-11-18 00:40:38.158399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.438 [2024-11-18 00:40:38.158640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.438 [2024-11-18 00:40:38.158839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.438 [2024-11-18 00:40:38.158857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.438 [2024-11-18 00:40:38.158870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.438 [2024-11-18 00:40:38.158881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.438 [2024-11-18 00:40:38.171136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.438 [2024-11-18 00:40:38.171501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.438 [2024-11-18 00:40:38.171530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.438 [2024-11-18 00:40:38.171547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.438 [2024-11-18 00:40:38.171776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.438 [2024-11-18 00:40:38.171991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.438 [2024-11-18 00:40:38.172010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.438 [2024-11-18 00:40:38.172022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.438 [2024-11-18 00:40:38.172033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.438 [2024-11-18 00:40:38.184460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.438 [2024-11-18 00:40:38.184938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.438 [2024-11-18 00:40:38.184967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.438 [2024-11-18 00:40:38.184983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.438 [2024-11-18 00:40:38.185225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.438 [2024-11-18 00:40:38.185465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.438 [2024-11-18 00:40:38.185487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.438 [2024-11-18 00:40:38.185500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.438 [2024-11-18 00:40:38.185512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.438 [2024-11-18 00:40:38.197796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.438 [2024-11-18 00:40:38.198235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.438 [2024-11-18 00:40:38.198263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.438 [2024-11-18 00:40:38.198279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.438 [2024-11-18 00:40:38.198528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.438 [2024-11-18 00:40:38.198744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.438 [2024-11-18 00:40:38.198763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.438 [2024-11-18 00:40:38.198776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.438 [2024-11-18 00:40:38.198787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.438 [2024-11-18 00:40:38.211060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:14.438 [2024-11-18 00:40:38.211472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:14.438 [2024-11-18 00:40:38.211500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:14.438 [2024-11-18 00:40:38.211531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:14.438 [2024-11-18 00:40:38.211771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:14.438 [2024-11-18 00:40:38.211969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:14.438 [2024-11-18 00:40:38.211987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:14.438 [2024-11-18 00:40:38.211999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:14.438 [2024-11-18 00:40:38.212011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:14.438 [2024-11-18 00:40:38.224255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.438 [2024-11-18 00:40:38.224634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.438 [2024-11-18 00:40:38.224663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.438 [2024-11-18 00:40:38.224680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.438 [2024-11-18 00:40:38.224910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.438 [2024-11-18 00:40:38.225123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.438 [2024-11-18 00:40:38.225142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.438 [2024-11-18 00:40:38.225163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.438 [2024-11-18 00:40:38.225176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.438 [2024-11-18 00:40:38.237547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.438 [2024-11-18 00:40:38.237954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.438 [2024-11-18 00:40:38.237982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.438 [2024-11-18 00:40:38.237999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.438 [2024-11-18 00:40:38.238240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.438 [2024-11-18 00:40:38.238487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.438 [2024-11-18 00:40:38.238509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.438 [2024-11-18 00:40:38.238522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.438 [2024-11-18 00:40:38.238534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.438 [2024-11-18 00:40:38.251082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.438 [2024-11-18 00:40:38.251488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.438 [2024-11-18 00:40:38.251519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.438 [2024-11-18 00:40:38.251536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.438 [2024-11-18 00:40:38.251785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.438 [2024-11-18 00:40:38.252088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.438 [2024-11-18 00:40:38.252112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.438 [2024-11-18 00:40:38.252125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.438 [2024-11-18 00:40:38.252153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.699 [2024-11-18 00:40:38.264715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.699 [2024-11-18 00:40:38.265118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.699 [2024-11-18 00:40:38.265162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.699 [2024-11-18 00:40:38.265180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.699 [2024-11-18 00:40:38.265417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.699 [2024-11-18 00:40:38.265636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.699 [2024-11-18 00:40:38.265655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.699 [2024-11-18 00:40:38.265668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.699 [2024-11-18 00:40:38.265680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.699 [2024-11-18 00:40:38.278082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.699 [2024-11-18 00:40:38.278434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.699 [2024-11-18 00:40:38.278479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.699 [2024-11-18 00:40:38.278496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.699 [2024-11-18 00:40:38.278724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.699 [2024-11-18 00:40:38.278937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.699 [2024-11-18 00:40:38.278956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.699 [2024-11-18 00:40:38.278969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.699 [2024-11-18 00:40:38.278980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.699 [2024-11-18 00:40:38.291404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.699 [2024-11-18 00:40:38.291865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.699 [2024-11-18 00:40:38.291893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.699 [2024-11-18 00:40:38.291910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.699 [2024-11-18 00:40:38.292151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.699 [2024-11-18 00:40:38.292376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.699 [2024-11-18 00:40:38.292411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.699 [2024-11-18 00:40:38.292425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.699 [2024-11-18 00:40:38.292437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.699 [2024-11-18 00:40:38.304716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.699 [2024-11-18 00:40:38.305053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.699 [2024-11-18 00:40:38.305082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.699 [2024-11-18 00:40:38.305098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.699 [2024-11-18 00:40:38.305338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.699 [2024-11-18 00:40:38.305549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.699 [2024-11-18 00:40:38.305569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.699 [2024-11-18 00:40:38.305582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.699 [2024-11-18 00:40:38.305594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.699 [2024-11-18 00:40:38.317879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.699 [2024-11-18 00:40:38.318252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.699 [2024-11-18 00:40:38.318295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.699 [2024-11-18 00:40:38.318326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.700 [2024-11-18 00:40:38.318571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.700 [2024-11-18 00:40:38.318788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.700 [2024-11-18 00:40:38.318807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.700 [2024-11-18 00:40:38.318820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.700 [2024-11-18 00:40:38.318831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.700 [2024-11-18 00:40:38.331224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.700 [2024-11-18 00:40:38.331563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.700 [2024-11-18 00:40:38.331606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.700 [2024-11-18 00:40:38.331623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.700 [2024-11-18 00:40:38.331860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.700 [2024-11-18 00:40:38.332092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.700 [2024-11-18 00:40:38.332111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.700 [2024-11-18 00:40:38.332123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.700 [2024-11-18 00:40:38.332135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.700 [2024-11-18 00:40:38.344572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.700 [2024-11-18 00:40:38.345017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.700 [2024-11-18 00:40:38.345046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.700 [2024-11-18 00:40:38.345062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.700 [2024-11-18 00:40:38.345291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.700 [2024-11-18 00:40:38.345540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.700 [2024-11-18 00:40:38.345561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.700 [2024-11-18 00:40:38.345574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.700 [2024-11-18 00:40:38.345586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.700 [2024-11-18 00:40:38.357773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.700 [2024-11-18 00:40:38.358174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.700 [2024-11-18 00:40:38.358216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.700 [2024-11-18 00:40:38.358233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.700 [2024-11-18 00:40:38.358470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.700 [2024-11-18 00:40:38.358712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.700 [2024-11-18 00:40:38.358732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.700 [2024-11-18 00:40:38.358744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.700 [2024-11-18 00:40:38.358756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.700 [2024-11-18 00:40:38.371082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.700 [2024-11-18 00:40:38.371466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.700 [2024-11-18 00:40:38.371494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.700 [2024-11-18 00:40:38.371511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.700 [2024-11-18 00:40:38.371741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.700 [2024-11-18 00:40:38.371955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.700 [2024-11-18 00:40:38.371974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.700 [2024-11-18 00:40:38.371986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.700 [2024-11-18 00:40:38.371997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.700 [2024-11-18 00:40:38.384265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.700 [2024-11-18 00:40:38.384705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.700 [2024-11-18 00:40:38.384734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.700 [2024-11-18 00:40:38.384750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.700 [2024-11-18 00:40:38.384978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.700 [2024-11-18 00:40:38.385192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.700 [2024-11-18 00:40:38.385210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.700 [2024-11-18 00:40:38.385224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.700 [2024-11-18 00:40:38.385235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.700 [2024-11-18 00:40:38.397441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.700 [2024-11-18 00:40:38.397834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.700 [2024-11-18 00:40:38.397862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.700 [2024-11-18 00:40:38.397879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.700 [2024-11-18 00:40:38.398120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.700 [2024-11-18 00:40:38.398344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.700 [2024-11-18 00:40:38.398380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.700 [2024-11-18 00:40:38.398399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.700 [2024-11-18 00:40:38.398412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.700 [2024-11-18 00:40:38.411050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.700 [2024-11-18 00:40:38.411506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.700 [2024-11-18 00:40:38.411535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.700 [2024-11-18 00:40:38.411551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.700 [2024-11-18 00:40:38.411805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.700 [2024-11-18 00:40:38.412003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.700 [2024-11-18 00:40:38.412022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.700 [2024-11-18 00:40:38.412035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.700 [2024-11-18 00:40:38.412046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.700 [2024-11-18 00:40:38.424279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.701 [2024-11-18 00:40:38.424654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.701 [2024-11-18 00:40:38.424682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.701 [2024-11-18 00:40:38.424698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.701 [2024-11-18 00:40:38.424919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.701 [2024-11-18 00:40:38.425132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.701 [2024-11-18 00:40:38.425150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.701 [2024-11-18 00:40:38.425163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.701 [2024-11-18 00:40:38.425174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.701 [2024-11-18 00:40:38.437530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.701 [2024-11-18 00:40:38.437985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.701 [2024-11-18 00:40:38.438013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.701 [2024-11-18 00:40:38.438029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.701 [2024-11-18 00:40:38.438269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.701 [2024-11-18 00:40:38.438516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.701 [2024-11-18 00:40:38.438538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.701 [2024-11-18 00:40:38.438551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.701 [2024-11-18 00:40:38.438563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.701 [2024-11-18 00:40:38.450827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.701 [2024-11-18 00:40:38.451265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.701 [2024-11-18 00:40:38.451293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.701 [2024-11-18 00:40:38.451309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.701 [2024-11-18 00:40:38.451549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.701 [2024-11-18 00:40:38.451765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.701 [2024-11-18 00:40:38.451784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.701 [2024-11-18 00:40:38.451796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.701 [2024-11-18 00:40:38.451808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.701 [2024-11-18 00:40:38.463997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.701 [2024-11-18 00:40:38.464358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.701 [2024-11-18 00:40:38.464388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.701 [2024-11-18 00:40:38.464406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.701 [2024-11-18 00:40:38.464634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.701 [2024-11-18 00:40:38.464848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.701 [2024-11-18 00:40:38.464866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.701 [2024-11-18 00:40:38.464878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.701 [2024-11-18 00:40:38.464890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.701 [2024-11-18 00:40:38.477331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.701 [2024-11-18 00:40:38.477748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.701 [2024-11-18 00:40:38.477777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.701 [2024-11-18 00:40:38.477793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.701 [2024-11-18 00:40:38.478021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.701 [2024-11-18 00:40:38.478234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.701 [2024-11-18 00:40:38.478252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.701 [2024-11-18 00:40:38.478265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.701 [2024-11-18 00:40:38.478276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.701 [2024-11-18 00:40:38.490668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.701 [2024-11-18 00:40:38.491040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.701 [2024-11-18 00:40:38.491083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.701 [2024-11-18 00:40:38.491104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.701 [2024-11-18 00:40:38.491372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.701 [2024-11-18 00:40:38.491590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.701 [2024-11-18 00:40:38.491629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.701 [2024-11-18 00:40:38.491643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.701 [2024-11-18 00:40:38.491655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.701 [2024-11-18 00:40:38.503953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.701 [2024-11-18 00:40:38.504326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.701 [2024-11-18 00:40:38.504369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.701 [2024-11-18 00:40:38.504385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.701 [2024-11-18 00:40:38.504638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.701 [2024-11-18 00:40:38.504836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.701 [2024-11-18 00:40:38.504855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.701 [2024-11-18 00:40:38.504867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.701 [2024-11-18 00:40:38.504879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.701 [2024-11-18 00:40:38.517474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.701 [2024-11-18 00:40:38.517862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.701 [2024-11-18 00:40:38.517902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.701 [2024-11-18 00:40:38.517932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.701 [2024-11-18 00:40:38.518218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.701 [2024-11-18 00:40:38.518477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.701 [2024-11-18 00:40:38.518500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.701 [2024-11-18 00:40:38.518514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.701 [2024-11-18 00:40:38.518527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.961 [2024-11-18 00:40:38.530761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.961 [2024-11-18 00:40:38.531102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.961 [2024-11-18 00:40:38.531132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.961 [2024-11-18 00:40:38.531149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.961 [2024-11-18 00:40:38.531386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.961 [2024-11-18 00:40:38.531625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.961 [2024-11-18 00:40:38.531645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.961 [2024-11-18 00:40:38.531658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.961 [2024-11-18 00:40:38.531685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.961 [2024-11-18 00:40:38.544029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.961 [2024-11-18 00:40:38.544393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.961 [2024-11-18 00:40:38.544422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.961 [2024-11-18 00:40:38.544439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.961 [2024-11-18 00:40:38.544667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.961 [2024-11-18 00:40:38.544887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.961 [2024-11-18 00:40:38.544907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.962 [2024-11-18 00:40:38.544919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.962 [2024-11-18 00:40:38.544931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.962 [2024-11-18 00:40:38.557209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.962 [2024-11-18 00:40:38.557597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.962 [2024-11-18 00:40:38.557626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.962 [2024-11-18 00:40:38.557642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.962 [2024-11-18 00:40:38.557870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.962 [2024-11-18 00:40:38.558084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.962 [2024-11-18 00:40:38.558103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.962 [2024-11-18 00:40:38.558115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.962 [2024-11-18 00:40:38.558127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.962 [2024-11-18 00:40:38.570438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.962 [2024-11-18 00:40:38.570837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.962 [2024-11-18 00:40:38.570881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.962 [2024-11-18 00:40:38.570905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.962 [2024-11-18 00:40:38.571172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.962 [2024-11-18 00:40:38.571399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.962 [2024-11-18 00:40:38.571419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.962 [2024-11-18 00:40:38.571432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.962 [2024-11-18 00:40:38.571449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.962 [2024-11-18 00:40:38.583770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.962 [2024-11-18 00:40:38.584177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.962 [2024-11-18 00:40:38.584205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.962 [2024-11-18 00:40:38.584222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.962 [2024-11-18 00:40:38.584460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.962 [2024-11-18 00:40:38.584695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.962 [2024-11-18 00:40:38.584714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.962 [2024-11-18 00:40:38.584727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.962 [2024-11-18 00:40:38.584739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.962 [2024-11-18 00:40:38.597069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.962 [2024-11-18 00:40:38.597442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.962 [2024-11-18 00:40:38.597486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.962 [2024-11-18 00:40:38.597502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.962 [2024-11-18 00:40:38.597755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.962 [2024-11-18 00:40:38.597952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.962 [2024-11-18 00:40:38.597971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.962 [2024-11-18 00:40:38.597983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.962 [2024-11-18 00:40:38.597995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.962 [2024-11-18 00:40:38.610407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.962 [2024-11-18 00:40:38.610747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.962 [2024-11-18 00:40:38.610789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.962 [2024-11-18 00:40:38.610806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.962 [2024-11-18 00:40:38.611027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.962 [2024-11-18 00:40:38.611240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.962 [2024-11-18 00:40:38.611259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.962 [2024-11-18 00:40:38.611272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.962 [2024-11-18 00:40:38.611283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.962 [2024-11-18 00:40:38.623747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.962 [2024-11-18 00:40:38.624179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.962 [2024-11-18 00:40:38.624223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.962 [2024-11-18 00:40:38.624239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.962 [2024-11-18 00:40:38.624462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.962 [2024-11-18 00:40:38.624706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.962 [2024-11-18 00:40:38.624725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.962 [2024-11-18 00:40:38.624738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.962 [2024-11-18 00:40:38.624749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.962 [2024-11-18 00:40:38.636972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.962 [2024-11-18 00:40:38.637323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.962 [2024-11-18 00:40:38.637351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.962 [2024-11-18 00:40:38.637367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.962 [2024-11-18 00:40:38.637595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.962 [2024-11-18 00:40:38.637812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.962 [2024-11-18 00:40:38.637831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.962 [2024-11-18 00:40:38.637843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.962 [2024-11-18 00:40:38.637854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.962 [2024-11-18 00:40:38.650261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.962 [2024-11-18 00:40:38.650625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.962 [2024-11-18 00:40:38.650653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.962 [2024-11-18 00:40:38.650670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.962 [2024-11-18 00:40:38.650901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.962 [2024-11-18 00:40:38.651121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.962 [2024-11-18 00:40:38.651141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.962 [2024-11-18 00:40:38.651153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.962 [2024-11-18 00:40:38.651166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.962 [2024-11-18 00:40:38.663699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.962 [2024-11-18 00:40:38.664070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.962 [2024-11-18 00:40:38.664115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.962 [2024-11-18 00:40:38.664131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.962 [2024-11-18 00:40:38.664390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.962 [2024-11-18 00:40:38.664616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.962 [2024-11-18 00:40:38.664636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.962 [2024-11-18 00:40:38.664648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.962 [2024-11-18 00:40:38.664675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.962 [2024-11-18 00:40:38.677020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.962 [2024-11-18 00:40:38.677388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.962 [2024-11-18 00:40:38.677418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.962 [2024-11-18 00:40:38.677435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.962 [2024-11-18 00:40:38.677649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.962 [2024-11-18 00:40:38.677863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.962 [2024-11-18 00:40:38.677881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.963 [2024-11-18 00:40:38.677893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.963 [2024-11-18 00:40:38.677905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.963 [2024-11-18 00:40:38.690300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.963 [2024-11-18 00:40:38.690717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.963 [2024-11-18 00:40:38.690761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.963 [2024-11-18 00:40:38.690777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.963 [2024-11-18 00:40:38.691043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.963 [2024-11-18 00:40:38.691245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.963 [2024-11-18 00:40:38.691264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.963 [2024-11-18 00:40:38.691276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.963 [2024-11-18 00:40:38.691288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.963 [2024-11-18 00:40:38.703548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.963 [2024-11-18 00:40:38.703960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.963 [2024-11-18 00:40:38.703987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.963 [2024-11-18 00:40:38.704019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.963 [2024-11-18 00:40:38.704246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.963 [2024-11-18 00:40:38.704488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.963 [2024-11-18 00:40:38.704513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.963 [2024-11-18 00:40:38.704527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.963 [2024-11-18 00:40:38.704538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.963 [2024-11-18 00:40:38.716792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.963 [2024-11-18 00:40:38.717143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.963 [2024-11-18 00:40:38.717171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.963 [2024-11-18 00:40:38.717188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.963 [2024-11-18 00:40:38.717425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.963 [2024-11-18 00:40:38.717661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.963 [2024-11-18 00:40:38.717680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.963 [2024-11-18 00:40:38.717693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.963 [2024-11-18 00:40:38.717704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.963 [2024-11-18 00:40:38.730196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.963 [2024-11-18 00:40:38.730594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.963 [2024-11-18 00:40:38.730637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.963 [2024-11-18 00:40:38.730653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.963 [2024-11-18 00:40:38.730906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.963 [2024-11-18 00:40:38.731104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.963 [2024-11-18 00:40:38.731122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.963 [2024-11-18 00:40:38.731134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.963 [2024-11-18 00:40:38.731146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.963 [2024-11-18 00:40:38.743464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.963 [2024-11-18 00:40:38.743893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.963 [2024-11-18 00:40:38.743921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.963 [2024-11-18 00:40:38.743937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.963 [2024-11-18 00:40:38.744143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.963 [2024-11-18 00:40:38.744393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.963 [2024-11-18 00:40:38.744415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.963 [2024-11-18 00:40:38.744443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.963 [2024-11-18 00:40:38.744461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.963 [2024-11-18 00:40:38.756759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.963 [2024-11-18 00:40:38.757099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.963 [2024-11-18 00:40:38.757127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.963 [2024-11-18 00:40:38.757143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.963 [2024-11-18 00:40:38.757386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.963 [2024-11-18 00:40:38.757607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.963 [2024-11-18 00:40:38.757642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.963 [2024-11-18 00:40:38.757655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.963 [2024-11-18 00:40:38.757666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.963 4554.80 IOPS, 17.79 MiB/s [2024-11-17T23:40:38.785Z] [2024-11-18 00:40:38.770049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.963 [2024-11-18 00:40:38.770448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.963 [2024-11-18 00:40:38.770477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:14.963 [2024-11-18 00:40:38.770493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:14.963 [2024-11-18 00:40:38.770720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:14.963 [2024-11-18 00:40:38.770933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.963 [2024-11-18 00:40:38.770952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.963 [2024-11-18 00:40:38.770965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.963 [2024-11-18 00:40:38.770976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.222 [2024-11-18 00:40:38.783815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.222 [2024-11-18 00:40:38.784184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.222 [2024-11-18 00:40:38.784228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.222 [2024-11-18 00:40:38.784245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.222 [2024-11-18 00:40:38.784490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.222 [2024-11-18 00:40:38.784707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.222 [2024-11-18 00:40:38.784727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.222 [2024-11-18 00:40:38.784739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.222 [2024-11-18 00:40:38.784751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.222 [2024-11-18 00:40:38.797044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.222 [2024-11-18 00:40:38.797474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.222 [2024-11-18 00:40:38.797505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.222 [2024-11-18 00:40:38.797522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.222 [2024-11-18 00:40:38.797767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.222 [2024-11-18 00:40:38.797965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.222 [2024-11-18 00:40:38.797984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.222 [2024-11-18 00:40:38.797996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.222 [2024-11-18 00:40:38.798007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.222 [2024-11-18 00:40:38.810462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.222 [2024-11-18 00:40:38.810859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.222 [2024-11-18 00:40:38.810888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.222 [2024-11-18 00:40:38.810904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.222 [2024-11-18 00:40:38.811127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.222 [2024-11-18 00:40:38.811367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.222 [2024-11-18 00:40:38.811388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.222 [2024-11-18 00:40:38.811401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.222 [2024-11-18 00:40:38.811412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.222 [2024-11-18 00:40:38.823856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.222 [2024-11-18 00:40:38.824266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.222 [2024-11-18 00:40:38.824336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.223 [2024-11-18 00:40:38.824352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.223 [2024-11-18 00:40:38.824619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.223 [2024-11-18 00:40:38.824828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.223 [2024-11-18 00:40:38.824846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.223 [2024-11-18 00:40:38.824859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.223 [2024-11-18 00:40:38.824870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.223 [2024-11-18 00:40:38.836950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.223 [2024-11-18 00:40:38.837383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.223 [2024-11-18 00:40:38.837426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.223 [2024-11-18 00:40:38.837443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.223 [2024-11-18 00:40:38.837684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.223 [2024-11-18 00:40:38.837877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.223 [2024-11-18 00:40:38.837895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.223 [2024-11-18 00:40:38.837907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.223 [2024-11-18 00:40:38.837918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.223 [2024-11-18 00:40:38.850050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.223 [2024-11-18 00:40:38.850387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.223 [2024-11-18 00:40:38.850415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.223 [2024-11-18 00:40:38.850431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.223 [2024-11-18 00:40:38.850651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.223 [2024-11-18 00:40:38.850859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.223 [2024-11-18 00:40:38.850877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.223 [2024-11-18 00:40:38.850889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.223 [2024-11-18 00:40:38.850900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.223 [2024-11-18 00:40:38.863207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.223 [2024-11-18 00:40:38.863600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.223 [2024-11-18 00:40:38.863644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.223 [2024-11-18 00:40:38.863660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.223 [2024-11-18 00:40:38.863894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.223 [2024-11-18 00:40:38.864102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.223 [2024-11-18 00:40:38.864120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.223 [2024-11-18 00:40:38.864132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.223 [2024-11-18 00:40:38.864143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.223 [2024-11-18 00:40:38.876294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.223 [2024-11-18 00:40:38.876681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.223 [2024-11-18 00:40:38.876723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.223 [2024-11-18 00:40:38.876738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.223 [2024-11-18 00:40:38.876984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.223 [2024-11-18 00:40:38.877175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.223 [2024-11-18 00:40:38.877201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.223 [2024-11-18 00:40:38.877214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.223 [2024-11-18 00:40:38.877225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.223 [2024-11-18 00:40:38.889450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.223 [2024-11-18 00:40:38.889818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.223 [2024-11-18 00:40:38.889861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.223 [2024-11-18 00:40:38.889877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.223 [2024-11-18 00:40:38.890145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.223 [2024-11-18 00:40:38.890365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.223 [2024-11-18 00:40:38.890384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.223 [2024-11-18 00:40:38.890397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.223 [2024-11-18 00:40:38.890408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.223 [2024-11-18 00:40:38.902532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.223 [2024-11-18 00:40:38.902983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.223 [2024-11-18 00:40:38.903027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.223 [2024-11-18 00:40:38.903043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.223 [2024-11-18 00:40:38.903295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.223 [2024-11-18 00:40:38.903504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.223 [2024-11-18 00:40:38.903524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.223 [2024-11-18 00:40:38.903536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.223 [2024-11-18 00:40:38.903548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.223 [2024-11-18 00:40:38.916121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.223 [2024-11-18 00:40:38.916534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.223 [2024-11-18 00:40:38.916563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.223 [2024-11-18 00:40:38.916579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.223 [2024-11-18 00:40:38.916820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.223 [2024-11-18 00:40:38.917034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.223 [2024-11-18 00:40:38.917053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.223 [2024-11-18 00:40:38.917065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.223 [2024-11-18 00:40:38.917081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.223 [2024-11-18 00:40:38.929371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.223 [2024-11-18 00:40:38.929776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.223 [2024-11-18 00:40:38.929804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.223 [2024-11-18 00:40:38.929820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.223 [2024-11-18 00:40:38.930053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.223 [2024-11-18 00:40:38.930261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.223 [2024-11-18 00:40:38.930279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.223 [2024-11-18 00:40:38.930305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.223 [2024-11-18 00:40:38.930328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.223 [2024-11-18 00:40:38.942475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.223 [2024-11-18 00:40:38.942781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.223 [2024-11-18 00:40:38.942823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.223 [2024-11-18 00:40:38.942839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.223 [2024-11-18 00:40:38.943053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.223 [2024-11-18 00:40:38.943261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.223 [2024-11-18 00:40:38.943281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.223 [2024-11-18 00:40:38.943293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.223 [2024-11-18 00:40:38.943304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.223 [2024-11-18 00:40:38.955481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.223 [2024-11-18 00:40:38.955843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.223 [2024-11-18 00:40:38.955870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.223 [2024-11-18 00:40:38.955884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.223 [2024-11-18 00:40:38.956098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.223 [2024-11-18 00:40:38.956331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.223 [2024-11-18 00:40:38.956351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.223 [2024-11-18 00:40:38.956363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.223 [2024-11-18 00:40:38.956374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.223 [2024-11-18 00:40:38.968504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.223 [2024-11-18 00:40:38.968909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.223 [2024-11-18 00:40:38.968936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.223 [2024-11-18 00:40:38.968952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.223 [2024-11-18 00:40:38.969175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.223 [2024-11-18 00:40:38.969393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.223 [2024-11-18 00:40:38.969411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.223 [2024-11-18 00:40:38.969423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.223 [2024-11-18 00:40:38.969434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.223 [2024-11-18 00:40:38.981481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.224 [2024-11-18 00:40:38.981854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.224 [2024-11-18 00:40:38.981882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.224 [2024-11-18 00:40:38.981898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.224 [2024-11-18 00:40:38.982132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.224 [2024-11-18 00:40:38.982351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.224 [2024-11-18 00:40:38.982371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.224 [2024-11-18 00:40:38.982383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.224 [2024-11-18 00:40:38.982394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.224 [2024-11-18 00:40:38.994538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.224 [2024-11-18 00:40:38.994902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.224 [2024-11-18 00:40:38.994930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.224 [2024-11-18 00:40:38.994946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.224 [2024-11-18 00:40:38.995180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.224 [2024-11-18 00:40:38.995401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.224 [2024-11-18 00:40:38.995422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.224 [2024-11-18 00:40:38.995434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.224 [2024-11-18 00:40:38.995445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.224 [2024-11-18 00:40:39.007572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.224 [2024-11-18 00:40:39.007906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.224 [2024-11-18 00:40:39.007933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.224 [2024-11-18 00:40:39.007948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.224 [2024-11-18 00:40:39.008175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.224 [2024-11-18 00:40:39.008412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.224 [2024-11-18 00:40:39.008432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.224 [2024-11-18 00:40:39.008444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.224 [2024-11-18 00:40:39.008456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.224 [2024-11-18 00:40:39.020534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.224 [2024-11-18 00:40:39.020964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.224 [2024-11-18 00:40:39.021006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.224 [2024-11-18 00:40:39.021024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.224 [2024-11-18 00:40:39.021263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.224 [2024-11-18 00:40:39.021501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.224 [2024-11-18 00:40:39.021521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.224 [2024-11-18 00:40:39.021533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.224 [2024-11-18 00:40:39.021545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.224 [2024-11-18 00:40:39.033547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.224 [2024-11-18 00:40:39.034038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.224 [2024-11-18 00:40:39.034081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.224 [2024-11-18 00:40:39.034097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.224 [2024-11-18 00:40:39.034358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.224 [2024-11-18 00:40:39.034570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.224 [2024-11-18 00:40:39.034589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.224 [2024-11-18 00:40:39.034601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.224 [2024-11-18 00:40:39.034613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.484 [2024-11-18 00:40:39.046990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.484 [2024-11-18 00:40:39.047337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.484 [2024-11-18 00:40:39.047367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.484 [2024-11-18 00:40:39.047400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.484 [2024-11-18 00:40:39.047642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.484 [2024-11-18 00:40:39.047870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.484 [2024-11-18 00:40:39.047911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.484 [2024-11-18 00:40:39.047924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.484 [2024-11-18 00:40:39.047937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.484 [2024-11-18 00:40:39.060087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.484 [2024-11-18 00:40:39.060465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.484 [2024-11-18 00:40:39.060511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.484 [2024-11-18 00:40:39.060527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.484 [2024-11-18 00:40:39.060795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.484 [2024-11-18 00:40:39.060988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.484 [2024-11-18 00:40:39.061007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.484 [2024-11-18 00:40:39.061019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.484 [2024-11-18 00:40:39.061030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.484 [2024-11-18 00:40:39.073207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.484 [2024-11-18 00:40:39.073575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.484 [2024-11-18 00:40:39.073604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.484 [2024-11-18 00:40:39.073621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.484 [2024-11-18 00:40:39.073862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.484 [2024-11-18 00:40:39.074070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.484 [2024-11-18 00:40:39.074088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.484 [2024-11-18 00:40:39.074101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.484 [2024-11-18 00:40:39.074112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.484 [2024-11-18 00:40:39.086451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.484 [2024-11-18 00:40:39.086848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.484 [2024-11-18 00:40:39.086876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.484 [2024-11-18 00:40:39.086893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.484 [2024-11-18 00:40:39.087128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.484 [2024-11-18 00:40:39.087362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.484 [2024-11-18 00:40:39.087382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.484 [2024-11-18 00:40:39.087394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.484 [2024-11-18 00:40:39.087411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.484 [2024-11-18 00:40:39.099483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.484 [2024-11-18 00:40:39.099851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.484 [2024-11-18 00:40:39.099895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.484 [2024-11-18 00:40:39.099911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.484 [2024-11-18 00:40:39.100177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.484 [2024-11-18 00:40:39.100398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.484 [2024-11-18 00:40:39.100417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.484 [2024-11-18 00:40:39.100430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.484 [2024-11-18 00:40:39.100441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.484 [2024-11-18 00:40:39.112531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.484 [2024-11-18 00:40:39.112897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.484 [2024-11-18 00:40:39.112940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.484 [2024-11-18 00:40:39.112956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.484 [2024-11-18 00:40:39.113208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.484 [2024-11-18 00:40:39.113442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.484 [2024-11-18 00:40:39.113462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.484 [2024-11-18 00:40:39.113474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.484 [2024-11-18 00:40:39.113486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.484 [2024-11-18 00:40:39.125576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.484 [2024-11-18 00:40:39.125910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.484 [2024-11-18 00:40:39.125938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.484 [2024-11-18 00:40:39.125953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.484 [2024-11-18 00:40:39.126174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.484 [2024-11-18 00:40:39.126410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.484 [2024-11-18 00:40:39.126430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.484 [2024-11-18 00:40:39.126442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.484 [2024-11-18 00:40:39.126454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.484 [2024-11-18 00:40:39.138650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.484 [2024-11-18 00:40:39.139079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.484 [2024-11-18 00:40:39.139125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.484 [2024-11-18 00:40:39.139143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.484 [2024-11-18 00:40:39.139392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.484 [2024-11-18 00:40:39.139599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.484 [2024-11-18 00:40:39.139617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.484 [2024-11-18 00:40:39.139629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.484 [2024-11-18 00:40:39.139640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.484 [2024-11-18 00:40:39.151806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.484 [2024-11-18 00:40:39.152185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.484 [2024-11-18 00:40:39.152225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.484 [2024-11-18 00:40:39.152242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.485 [2024-11-18 00:40:39.152511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.485 [2024-11-18 00:40:39.152722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.485 [2024-11-18 00:40:39.152741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.485 [2024-11-18 00:40:39.152752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.485 [2024-11-18 00:40:39.152763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.485 [2024-11-18 00:40:39.165460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:15.485 [2024-11-18 00:40:39.165891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:15.485 [2024-11-18 00:40:39.165919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420
00:35:15.485 [2024-11-18 00:40:39.165936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set
00:35:15.485 [2024-11-18 00:40:39.166163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor
00:35:15.485 [2024-11-18 00:40:39.166406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:15.485 [2024-11-18 00:40:39.166426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:15.485 [2024-11-18 00:40:39.166439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:15.485 [2024-11-18 00:40:39.166450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:15.485 [2024-11-18 00:40:39.178762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.485 [2024-11-18 00:40:39.179093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.485 [2024-11-18 00:40:39.179121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.485 [2024-11-18 00:40:39.179137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.485 [2024-11-18 00:40:39.179398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.485 [2024-11-18 00:40:39.179633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.485 [2024-11-18 00:40:39.179651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.485 [2024-11-18 00:40:39.179663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.485 [2024-11-18 00:40:39.179674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.485 [2024-11-18 00:40:39.191884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.485 [2024-11-18 00:40:39.192321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.485 [2024-11-18 00:40:39.192365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.485 [2024-11-18 00:40:39.192382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.485 [2024-11-18 00:40:39.192622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.485 [2024-11-18 00:40:39.192830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.485 [2024-11-18 00:40:39.192848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.485 [2024-11-18 00:40:39.192862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.485 [2024-11-18 00:40:39.192874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.485 [2024-11-18 00:40:39.205138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.485 [2024-11-18 00:40:39.205549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.485 [2024-11-18 00:40:39.205577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.485 [2024-11-18 00:40:39.205593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.485 [2024-11-18 00:40:39.205823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.485 [2024-11-18 00:40:39.206031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.485 [2024-11-18 00:40:39.206049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.485 [2024-11-18 00:40:39.206061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.485 [2024-11-18 00:40:39.206072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.485 [2024-11-18 00:40:39.218287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.485 [2024-11-18 00:40:39.218793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.485 [2024-11-18 00:40:39.218844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.485 [2024-11-18 00:40:39.218860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.485 [2024-11-18 00:40:39.219120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.485 [2024-11-18 00:40:39.219320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.485 [2024-11-18 00:40:39.219359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.485 [2024-11-18 00:40:39.219372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.485 [2024-11-18 00:40:39.219384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.485 [2024-11-18 00:40:39.231515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.485 [2024-11-18 00:40:39.231948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.485 [2024-11-18 00:40:39.232001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.485 [2024-11-18 00:40:39.232016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.485 [2024-11-18 00:40:39.232277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.485 [2024-11-18 00:40:39.232497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.485 [2024-11-18 00:40:39.232517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.485 [2024-11-18 00:40:39.232529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.485 [2024-11-18 00:40:39.232540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.485 [2024-11-18 00:40:39.244664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.485 [2024-11-18 00:40:39.245023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.485 [2024-11-18 00:40:39.245091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.485 [2024-11-18 00:40:39.245107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.485 [2024-11-18 00:40:39.245357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.485 [2024-11-18 00:40:39.245555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.485 [2024-11-18 00:40:39.245574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.485 [2024-11-18 00:40:39.245586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.485 [2024-11-18 00:40:39.245597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.485 [2024-11-18 00:40:39.257730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.485 [2024-11-18 00:40:39.258066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.485 [2024-11-18 00:40:39.258093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.485 [2024-11-18 00:40:39.258109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.486 [2024-11-18 00:40:39.258341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.486 [2024-11-18 00:40:39.258555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.486 [2024-11-18 00:40:39.258573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.486 [2024-11-18 00:40:39.258586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.486 [2024-11-18 00:40:39.258597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.486 [2024-11-18 00:40:39.270866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.486 [2024-11-18 00:40:39.271228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.486 [2024-11-18 00:40:39.271255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.486 [2024-11-18 00:40:39.271270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.486 [2024-11-18 00:40:39.271530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.486 [2024-11-18 00:40:39.271741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.486 [2024-11-18 00:40:39.271760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.486 [2024-11-18 00:40:39.271771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.486 [2024-11-18 00:40:39.271782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.486 [2024-11-18 00:40:39.284093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.486 [2024-11-18 00:40:39.284457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.486 [2024-11-18 00:40:39.284486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.486 [2024-11-18 00:40:39.284502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.486 [2024-11-18 00:40:39.284743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.486 [2024-11-18 00:40:39.284951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.486 [2024-11-18 00:40:39.284970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.486 [2024-11-18 00:40:39.284982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.486 [2024-11-18 00:40:39.284993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.486 [2024-11-18 00:40:39.297373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.486 [2024-11-18 00:40:39.297721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.486 [2024-11-18 00:40:39.297748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.486 [2024-11-18 00:40:39.297764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.486 [2024-11-18 00:40:39.297970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.486 [2024-11-18 00:40:39.298194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.486 [2024-11-18 00:40:39.298212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.486 [2024-11-18 00:40:39.298224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.486 [2024-11-18 00:40:39.298236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.751 [2024-11-18 00:40:39.310619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.751 [2024-11-18 00:40:39.310991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.752 [2024-11-18 00:40:39.311040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.752 [2024-11-18 00:40:39.311057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.752 [2024-11-18 00:40:39.311309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.752 [2024-11-18 00:40:39.311534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.752 [2024-11-18 00:40:39.311553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.752 [2024-11-18 00:40:39.311566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.752 [2024-11-18 00:40:39.311578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.752 [2024-11-18 00:40:39.323854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.752 [2024-11-18 00:40:39.324222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.752 [2024-11-18 00:40:39.324251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.752 [2024-11-18 00:40:39.324268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.752 [2024-11-18 00:40:39.324523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.752 [2024-11-18 00:40:39.324755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.752 [2024-11-18 00:40:39.324774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.752 [2024-11-18 00:40:39.324786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.752 [2024-11-18 00:40:39.324798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.752 [2024-11-18 00:40:39.337112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.752 [2024-11-18 00:40:39.337557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.752 [2024-11-18 00:40:39.337587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.752 [2024-11-18 00:40:39.337603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.752 [2024-11-18 00:40:39.337853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.752 [2024-11-18 00:40:39.338045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.752 [2024-11-18 00:40:39.338064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.752 [2024-11-18 00:40:39.338075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.752 [2024-11-18 00:40:39.338086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.752 [2024-11-18 00:40:39.350366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.752 [2024-11-18 00:40:39.350694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.752 [2024-11-18 00:40:39.350723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.752 [2024-11-18 00:40:39.350740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.752 [2024-11-18 00:40:39.350973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.752 [2024-11-18 00:40:39.351188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.752 [2024-11-18 00:40:39.351207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.752 [2024-11-18 00:40:39.351219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.752 [2024-11-18 00:40:39.351231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.752 [2024-11-18 00:40:39.363632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.752 [2024-11-18 00:40:39.363964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.752 [2024-11-18 00:40:39.363992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.752 [2024-11-18 00:40:39.364009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.752 [2024-11-18 00:40:39.364230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.752 [2024-11-18 00:40:39.364469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.752 [2024-11-18 00:40:39.364489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.752 [2024-11-18 00:40:39.364502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.752 [2024-11-18 00:40:39.364514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 400088 Killed "${NVMF_APP[@]}" "$@" 00:35:15.752 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:15.752 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:15.752 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:15.752 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:15.752 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:15.752 [2024-11-18 00:40:39.377136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.752 [2024-11-18 00:40:39.377566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.752 [2024-11-18 00:40:39.377597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.752 [2024-11-18 00:40:39.377614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.752 [2024-11-18 00:40:39.377842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.752 [2024-11-18 00:40:39.378071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.752 [2024-11-18 00:40:39.378089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.752 [2024-11-18 00:40:39.378102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:35:15.752 [2024-11-18 00:40:39.378113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:15.752 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=401041 00:35:15.752 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:15.752 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 401041 00:35:15.752 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 401041 ']' 00:35:15.752 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:15.752 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:15.752 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:15.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:15.752 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:15.752 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:15.752 [2024-11-18 00:40:39.390481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.752 [2024-11-18 00:40:39.390935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.752 [2024-11-18 00:40:39.390965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.752 [2024-11-18 00:40:39.390982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.752 [2024-11-18 00:40:39.391224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.752 [2024-11-18 00:40:39.391477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.752 [2024-11-18 00:40:39.391498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.752 [2024-11-18 00:40:39.391512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.752 [2024-11-18 00:40:39.391526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.752 [2024-11-18 00:40:39.403824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.752 [2024-11-18 00:40:39.404190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.752 [2024-11-18 00:40:39.404217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.752 [2024-11-18 00:40:39.404234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.752 [2024-11-18 00:40:39.404497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.752 [2024-11-18 00:40:39.404722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.752 [2024-11-18 00:40:39.404742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.753 [2024-11-18 00:40:39.404755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.753 [2024-11-18 00:40:39.404783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.753 [2024-11-18 00:40:39.417108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.753 [2024-11-18 00:40:39.417468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.753 [2024-11-18 00:40:39.417497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.753 [2024-11-18 00:40:39.417514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.753 [2024-11-18 00:40:39.417744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.753 [2024-11-18 00:40:39.417957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.753 [2024-11-18 00:40:39.417981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.753 [2024-11-18 00:40:39.417994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.753 [2024-11-18 00:40:39.418005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.753 [2024-11-18 00:40:39.430605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.753 [2024-11-18 00:40:39.431125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.753 [2024-11-18 00:40:39.431169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.753 [2024-11-18 00:40:39.431186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.753 [2024-11-18 00:40:39.431453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.753 [2024-11-18 00:40:39.431691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.753 [2024-11-18 00:40:39.431710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.753 [2024-11-18 00:40:39.431723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.753 [2024-11-18 00:40:39.431734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:15.753 [2024-11-18 00:40:39.435243] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:35:15.753 [2024-11-18 00:40:39.435345] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:15.753 [2024-11-18 00:40:39.443807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.753 [2024-11-18 00:40:39.444153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.753 [2024-11-18 00:40:39.444181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.753 [2024-11-18 00:40:39.444198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.753 [2024-11-18 00:40:39.444669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.753 [2024-11-18 00:40:39.444882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.753 [2024-11-18 00:40:39.444902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.753 [2024-11-18 00:40:39.444929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.753 [2024-11-18 00:40:39.444941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.753 [2024-11-18 00:40:39.457054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.753 [2024-11-18 00:40:39.457473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.753 [2024-11-18 00:40:39.457503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.753 [2024-11-18 00:40:39.457520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.753 [2024-11-18 00:40:39.457761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.753 [2024-11-18 00:40:39.457981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.753 [2024-11-18 00:40:39.458001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.753 [2024-11-18 00:40:39.458013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.753 [2024-11-18 00:40:39.458025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.753 [2024-11-18 00:40:39.470244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.753 [2024-11-18 00:40:39.470695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.753 [2024-11-18 00:40:39.470724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.753 [2024-11-18 00:40:39.470740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.753 [2024-11-18 00:40:39.470968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.753 [2024-11-18 00:40:39.471182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.753 [2024-11-18 00:40:39.471201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.753 [2024-11-18 00:40:39.471213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.753 [2024-11-18 00:40:39.471224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.753 [2024-11-18 00:40:39.483609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.753 [2024-11-18 00:40:39.483943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.753 [2024-11-18 00:40:39.483971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.753 [2024-11-18 00:40:39.483988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.753 [2024-11-18 00:40:39.484217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.753 [2024-11-18 00:40:39.484462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.753 [2024-11-18 00:40:39.484482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.753 [2024-11-18 00:40:39.484495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.753 [2024-11-18 00:40:39.484507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.753 [2024-11-18 00:40:39.497006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.753 [2024-11-18 00:40:39.497354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.753 [2024-11-18 00:40:39.497389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.753 [2024-11-18 00:40:39.497405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.753 [2024-11-18 00:40:39.497633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.753 [2024-11-18 00:40:39.497854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.753 [2024-11-18 00:40:39.497873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.753 [2024-11-18 00:40:39.497886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.753 [2024-11-18 00:40:39.497904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.753 [2024-11-18 00:40:39.510277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.753 [2024-11-18 00:40:39.510688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.753 [2024-11-18 00:40:39.510717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.753 [2024-11-18 00:40:39.510733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.753 [2024-11-18 00:40:39.510946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.753 [2024-11-18 00:40:39.511186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.753 [2024-11-18 00:40:39.511206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.753 [2024-11-18 00:40:39.511218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.753 [2024-11-18 00:40:39.511231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.753 [2024-11-18 00:40:39.522516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:15.753 [2024-11-18 00:40:39.523752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.753 [2024-11-18 00:40:39.524116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.753 [2024-11-18 00:40:39.524144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.753 [2024-11-18 00:40:39.524161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.754 [2024-11-18 00:40:39.524401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.754 [2024-11-18 00:40:39.524622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.754 [2024-11-18 00:40:39.524641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.754 [2024-11-18 00:40:39.524654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.754 [2024-11-18 00:40:39.524666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.754 [2024-11-18 00:40:39.537110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.754 [2024-11-18 00:40:39.537605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.754 [2024-11-18 00:40:39.537644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.754 [2024-11-18 00:40:39.537665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.754 [2024-11-18 00:40:39.537899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.754 [2024-11-18 00:40:39.538111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.754 [2024-11-18 00:40:39.538132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.754 [2024-11-18 00:40:39.538150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.754 [2024-11-18 00:40:39.538164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.754 [2024-11-18 00:40:39.550640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.754 [2024-11-18 00:40:39.551017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.754 [2024-11-18 00:40:39.551047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.754 [2024-11-18 00:40:39.551065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.754 [2024-11-18 00:40:39.551297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.754 [2024-11-18 00:40:39.551529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.754 [2024-11-18 00:40:39.551550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.754 [2024-11-18 00:40:39.551563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.754 [2024-11-18 00:40:39.551575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:15.754 [2024-11-18 00:40:39.564071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:15.754 [2024-11-18 00:40:39.564728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:15.754 [2024-11-18 00:40:39.564772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:15.754 [2024-11-18 00:40:39.564788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:15.754 [2024-11-18 00:40:39.565026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:15.754 [2024-11-18 00:40:39.565230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:15.754 [2024-11-18 00:40:39.565250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:15.754 [2024-11-18 00:40:39.565264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:15.754 [2024-11-18 00:40:39.565276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:15.754 [2024-11-18 00:40:39.571720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:15.754 [2024-11-18 00:40:39.571760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:15.754 [2024-11-18 00:40:39.571775] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:15.754 [2024-11-18 00:40:39.571788] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:35:15.754 [2024-11-18 00:40:39.571799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:16.014 [2024-11-18 00:40:39.573432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:16.014 [2024-11-18 00:40:39.573464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:16.014 [2024-11-18 00:40:39.573468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:16.014 [2024-11-18 00:40:39.577682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:16.014 [2024-11-18 00:40:39.578154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.014 [2024-11-18 00:40:39.578189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:16.014 [2024-11-18 00:40:39.578210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:16.014 [2024-11-18 00:40:39.578443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:16.014 [2024-11-18 00:40:39.578690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:16.014 [2024-11-18 00:40:39.578722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:16.014 [2024-11-18 00:40:39.578750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:16.014 [2024-11-18 00:40:39.578777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:16.014 [2024-11-18 00:40:39.591219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:16.014 [2024-11-18 00:40:39.591825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.014 [2024-11-18 00:40:39.591866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:16.014 [2024-11-18 00:40:39.591888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:16.014 [2024-11-18 00:40:39.592128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:16.014 [2024-11-18 00:40:39.592377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:16.014 [2024-11-18 00:40:39.592400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:16.014 [2024-11-18 00:40:39.592418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:16.014 [2024-11-18 00:40:39.592433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:16.014 [2024-11-18 00:40:39.604849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:16.014 [2024-11-18 00:40:39.605326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.014 [2024-11-18 00:40:39.605367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:16.014 [2024-11-18 00:40:39.605389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:16.014 [2024-11-18 00:40:39.605630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:16.014 [2024-11-18 00:40:39.605850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:16.014 [2024-11-18 00:40:39.605871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:16.014 [2024-11-18 00:40:39.605889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:16.014 [2024-11-18 00:40:39.605904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:16.014 [2024-11-18 00:40:39.618526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:16.014 [2024-11-18 00:40:39.619026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.014 [2024-11-18 00:40:39.619067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:16.014 [2024-11-18 00:40:39.619089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:16.014 [2024-11-18 00:40:39.619341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:16.014 [2024-11-18 00:40:39.619584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:16.014 [2024-11-18 00:40:39.619606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:16.014 [2024-11-18 00:40:39.619624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:16.014 [2024-11-18 00:40:39.619648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:16.014 [2024-11-18 00:40:39.632083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:16.014 [2024-11-18 00:40:39.632596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.014 [2024-11-18 00:40:39.632634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:16.014 [2024-11-18 00:40:39.632654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:16.014 [2024-11-18 00:40:39.632893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:16.014 [2024-11-18 00:40:39.633129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:16.014 [2024-11-18 00:40:39.633151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:16.014 [2024-11-18 00:40:39.633168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:16.014 [2024-11-18 00:40:39.633183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:16.014 [2024-11-18 00:40:39.645744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:16.014 [2024-11-18 00:40:39.646244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.014 [2024-11-18 00:40:39.646285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:16.015 [2024-11-18 00:40:39.646307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:16.015 [2024-11-18 00:40:39.646543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:16.015 [2024-11-18 00:40:39.646779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:16.015 [2024-11-18 00:40:39.646801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:16.015 [2024-11-18 00:40:39.646818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:16.015 [2024-11-18 00:40:39.646834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:16.015 [2024-11-18 00:40:39.659375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:16.015 [2024-11-18 00:40:39.659908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.015 [2024-11-18 00:40:39.659946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:16.015 [2024-11-18 00:40:39.659968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:16.015 [2024-11-18 00:40:39.660209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:16.015 [2024-11-18 00:40:39.660459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:16.015 [2024-11-18 00:40:39.660482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:16.015 [2024-11-18 00:40:39.660501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:16.015 [2024-11-18 00:40:39.660516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:16.015 [2024-11-18 00:40:39.673068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:16.015 [2024-11-18 00:40:39.673427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.015 [2024-11-18 00:40:39.673457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:16.015 [2024-11-18 00:40:39.673474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:16.015 [2024-11-18 00:40:39.673704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:16.015 [2024-11-18 00:40:39.673925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:16.015 [2024-11-18 00:40:39.673945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:16.015 [2024-11-18 00:40:39.673958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:16.015 [2024-11-18 00:40:39.673970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:16.015 [2024-11-18 00:40:39.686571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:16.015 [2024-11-18 00:40:39.686955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.015 [2024-11-18 00:40:39.686984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:16.015 [2024-11-18 00:40:39.687001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:16.015 [2024-11-18 00:40:39.687214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:16.015 [2024-11-18 00:40:39.687471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:16.015 [2024-11-18 00:40:39.687493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:16.015 [2024-11-18 00:40:39.687506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:16.015 [2024-11-18 00:40:39.687519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:16.015 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:16.015 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:16.015 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:16.015 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:16.015 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:16.015 [2024-11-18 00:40:39.700139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:16.015 [2024-11-18 00:40:39.700467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.015 [2024-11-18 00:40:39.700496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:16.015 [2024-11-18 00:40:39.700513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:16.015 [2024-11-18 00:40:39.700727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:16.015 [2024-11-18 00:40:39.700947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:16.015 [2024-11-18 00:40:39.700967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:16.015 [2024-11-18 00:40:39.700980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:16.015 [2024-11-18 00:40:39.700992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:16.015 [2024-11-18 00:40:39.713577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:16.015 [2024-11-18 00:40:39.713960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.015 [2024-11-18 00:40:39.713989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:16.015 [2024-11-18 00:40:39.714007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:16.015 [2024-11-18 00:40:39.714244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:16.015 [2024-11-18 00:40:39.714500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:16.015 [2024-11-18 00:40:39.714523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:16.015 [2024-11-18 00:40:39.714536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:16.015 [2024-11-18 00:40:39.714549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:16.015 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:16.015 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:16.015 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.015 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:16.015 [2024-11-18 00:40:39.723712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:16.015 [2024-11-18 00:40:39.727083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:16.015 [2024-11-18 00:40:39.727440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.015 [2024-11-18 00:40:39.727469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:16.015 [2024-11-18 00:40:39.727485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:16.015 [2024-11-18 00:40:39.727713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:16.015 [2024-11-18 00:40:39.727934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:16.015 [2024-11-18 00:40:39.727954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:16.015 [2024-11-18 00:40:39.727968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:16.015 [2024-11-18 00:40:39.727981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:16.015 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.015 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:16.015 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.015 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:16.015 [2024-11-18 00:40:39.740691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:16.015 [2024-11-18 00:40:39.741182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.015 [2024-11-18 00:40:39.741219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:16.015 [2024-11-18 00:40:39.741241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:16.015 [2024-11-18 00:40:39.741472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:16.015 [2024-11-18 00:40:39.741734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:16.015 [2024-11-18 00:40:39.741754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:16.015 [2024-11-18 00:40:39.741786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:16.015 [2024-11-18 00:40:39.741800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:16.015 [2024-11-18 00:40:39.754237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:16.015 [2024-11-18 00:40:39.754598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.015 [2024-11-18 00:40:39.754628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:16.015 [2024-11-18 00:40:39.754654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:16.015 [2024-11-18 00:40:39.754888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:16.015 [2024-11-18 00:40:39.755102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:16.015 [2024-11-18 00:40:39.755122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:16.015 [2024-11-18 00:40:39.755134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:16.015 [2024-11-18 00:40:39.755146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:16.015 3795.67 IOPS, 14.83 MiB/s [2024-11-17T23:40:39.838Z] [2024-11-18 00:40:39.769207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:16.016 [2024-11-18 00:40:39.769628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.016 [2024-11-18 00:40:39.769661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:16.016 [2024-11-18 00:40:39.769681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:16.016 [2024-11-18 00:40:39.769917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:16.016 [2024-11-18 00:40:39.770125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:16.016 [2024-11-18 00:40:39.770145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:16.016 [2024-11-18 00:40:39.770171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:16.016 [2024-11-18 00:40:39.770185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
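The host setup in this section drives the target through the framework's rpc_cmd wrapper (bdev_malloc_create here, followed just below by nvmf_create_subsystem, nvmf_subsystem_add_ns, and nvmf_subsystem_add_listener). As a rough standalone sketch, the same target configuration could be issued directly with SPDK's rpc.py; a running nvmf_tgt, rpc.py on PATH, and the nvmf_create_transport step (performed earlier in the test, not visible in this excerpt) are assumptions here:

```shell
# Sketch only: assumes a running SPDK nvmf_tgt and rpc.py on PATH.
# Mirrors the rpc_cmd sequence logged by host/bdevperf.sh in this section.
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_transport -t TCP        # transport must exist before adding a listener
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```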
00:35:16.016 Malloc0 00:35:16.016 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.016 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:16.016 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.016 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:16.016 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.016 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:16.016 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.016 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:16.016 [2024-11-18 00:40:39.783122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:16.016 [2024-11-18 00:40:39.783520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:16.016 [2024-11-18 00:40:39.783551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2fcf0 with addr=10.0.0.2, port=4420 00:35:16.016 [2024-11-18 00:40:39.783569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2fcf0 is same with the state(6) to be set 00:35:16.016 [2024-11-18 00:40:39.783799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2fcf0 (9): Bad file descriptor 00:35:16.016 [2024-11-18 00:40:39.784019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:16.016 [2024-11-18 00:40:39.784039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 
2] controller reinitialization failed 00:35:16.016 [2024-11-18 00:40:39.784052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:16.016 [2024-11-18 00:40:39.784064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:16.016 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.016 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:16.016 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.016 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:16.016 [2024-11-18 00:40:39.792876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:16.016 [2024-11-18 00:40:39.796782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:16.016 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.016 00:40:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 400374 00:35:16.274 [2024-11-18 00:40:39.864035] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
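The bdevperf summary that follows settles at 6692.11 IOPS with 4 KiB I/O (IO size: 4096). The MiB/s column can be sanity-checked directly, since throughput is IOPS times block size:

```shell
# Verify the reported 26.14 MiB/s: 6692.11 IOPS * 4096 B per I/O / 2^20 B per MiB
awk 'BEGIN { printf "%.2f MiB/s\n", 6692.11 * 4096 / 1048576 }'
```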
00:35:18.173 4371.14 IOPS, 17.07 MiB/s [2024-11-17T23:40:42.928Z] 4919.50 IOPS, 19.22 MiB/s [2024-11-17T23:40:43.880Z] 5346.22 IOPS, 20.88 MiB/s [2024-11-17T23:40:44.815Z] 5691.10 IOPS, 22.23 MiB/s [2024-11-17T23:40:46.189Z] 5968.82 IOPS, 23.32 MiB/s [2024-11-17T23:40:47.133Z] 6189.58 IOPS, 24.18 MiB/s [2024-11-17T23:40:48.068Z] 6376.62 IOPS, 24.91 MiB/s [2024-11-17T23:40:49.000Z] 6541.50 IOPS, 25.55 MiB/s [2024-11-17T23:40:49.000Z] 6688.67 IOPS, 26.13 MiB/s 00:35:25.178 Latency(us) 00:35:25.178 [2024-11-17T23:40:49.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.178 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:25.178 Verification LBA range: start 0x0 length 0x4000 00:35:25.178 Nvme1n1 : 15.01 6692.11 26.14 10225.62 0.00 7543.54 813.13 19903.53 00:35:25.178 [2024-11-17T23:40:49.000Z] =================================================================================================================== 00:35:25.178 [2024-11-17T23:40:49.000Z] Total : 6692.11 26.14 10225.62 0.00 7543.54 813.13 19903.53 00:35:25.178 00:40:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:25.178 00:40:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:25.178 00:40:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.178 00:40:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:25.178 00:40:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.178 00:40:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:25.178 00:40:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:25.178 00:40:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:25.178 00:40:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:35:25.178 00:40:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:25.178 00:40:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:25.178 00:40:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:25.178 00:40:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:25.178 rmmod nvme_tcp 00:35:25.435 rmmod nvme_fabrics 00:35:25.435 rmmod nvme_keyring 00:35:25.435 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:25.435 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:25.435 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:25.435 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 401041 ']' 00:35:25.435 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 401041 00:35:25.435 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 401041 ']' 00:35:25.435 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 401041 00:35:25.435 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:35:25.435 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.435 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 401041 00:35:25.435 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:25.435 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:25.435 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 401041' 00:35:25.435 killing process with pid 401041 00:35:25.435 00:40:49 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 401041 00:35:25.435 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 401041 00:35:25.694 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:25.694 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:25.694 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:25.694 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:25.694 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:35:25.694 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:25.694 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:35:25.694 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:25.694 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:25.694 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:25.694 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:25.694 00:40:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:27.603 00:40:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:27.603 00:35:27.603 real 0m22.549s 00:35:27.603 user 0m59.140s 00:35:27.603 sys 0m4.727s 00:35:27.603 00:40:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:27.603 00:40:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.603 ************************************ 00:35:27.603 END TEST nvmf_bdevperf 00:35:27.603 
************************************ 00:35:27.603 00:40:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:27.603 00:40:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:27.603 00:40:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:27.603 00:40:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.603 ************************************ 00:35:27.603 START TEST nvmf_target_disconnect 00:35:27.603 ************************************ 00:35:27.603 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:27.862 * Looking for test storage... 00:35:27.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:27.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:27.862 --rc genhtml_branch_coverage=1 00:35:27.862 --rc genhtml_function_coverage=1 00:35:27.862 --rc genhtml_legend=1 00:35:27.862 --rc geninfo_all_blocks=1 00:35:27.862 --rc geninfo_unexecuted_blocks=1 
00:35:27.862 00:35:27.862 ' 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:27.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:27.862 --rc genhtml_branch_coverage=1 00:35:27.862 --rc genhtml_function_coverage=1 00:35:27.862 --rc genhtml_legend=1 00:35:27.862 --rc geninfo_all_blocks=1 00:35:27.862 --rc geninfo_unexecuted_blocks=1 00:35:27.862 00:35:27.862 ' 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:27.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:27.862 --rc genhtml_branch_coverage=1 00:35:27.862 --rc genhtml_function_coverage=1 00:35:27.862 --rc genhtml_legend=1 00:35:27.862 --rc geninfo_all_blocks=1 00:35:27.862 --rc geninfo_unexecuted_blocks=1 00:35:27.862 00:35:27.862 ' 00:35:27.862 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:27.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:27.862 --rc genhtml_branch_coverage=1 00:35:27.862 --rc genhtml_function_coverage=1 00:35:27.862 --rc genhtml_legend=1 00:35:27.862 --rc geninfo_all_blocks=1 00:35:27.862 --rc geninfo_unexecuted_blocks=1 00:35:27.862 00:35:27.863 ' 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:27.863 00:40:51 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:27.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:27.863 00:40:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:35:30.398 
00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:30.398 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:30.398 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:30.399 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:30.399 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:30.399 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:30.399 00:40:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:30.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:30.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:35:30.399 00:35:30.399 --- 10.0.0.2 ping statistics --- 00:35:30.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:30.399 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:30.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:30.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:35:30.399 00:35:30.399 --- 10.0.0.1 ping statistics --- 00:35:30.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:30.399 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:30.399 00:40:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:30.399 ************************************ 00:35:30.399 START TEST nvmf_target_disconnect_tc1 00:35:30.399 ************************************ 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:30.399 00:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:30.399 [2024-11-18 00:40:54.044392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.399 [2024-11-18 00:40:54.044474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd99a90 with 
addr=10.0.0.2, port=4420 00:35:30.399 [2024-11-18 00:40:54.044515] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:30.400 [2024-11-18 00:40:54.044535] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:30.400 [2024-11-18 00:40:54.044548] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:30.400 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:30.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:30.400 Initializing NVMe Controllers 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:30.400 00:35:30.400 real 0m0.094s 00:35:30.400 user 0m0.045s 00:35:30.400 sys 0m0.049s 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:30.400 ************************************ 00:35:30.400 END TEST nvmf_target_disconnect_tc1 00:35:30.400 ************************************ 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:30.400 00:40:54 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:30.400 ************************************ 00:35:30.400 START TEST nvmf_target_disconnect_tc2 00:35:30.400 ************************************ 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=404189 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 404189 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 404189 ']' 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:30.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:30.400 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:30.400 [2024-11-18 00:40:54.157218] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:30.400 [2024-11-18 00:40:54.157309] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:30.658 [2024-11-18 00:40:54.236485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:30.658 [2024-11-18 00:40:54.284108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:30.658 [2024-11-18 00:40:54.284162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:30.658 [2024-11-18 00:40:54.284185] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:30.658 [2024-11-18 00:40:54.284196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:30.658 [2024-11-18 00:40:54.284206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:30.658 [2024-11-18 00:40:54.285736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:30.658 [2024-11-18 00:40:54.285816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:30.658 [2024-11-18 00:40:54.285760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:30.658 [2024-11-18 00:40:54.285819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:30.658 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:30.658 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:35:30.658 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:30.658 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:30.658 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:30.658 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:30.658 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:30.658 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.658 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:30.658 Malloc0 00:35:30.658 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.658 00:40:54 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:30.658 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.658 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:30.658 [2024-11-18 00:40:54.470051] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:30.658 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.658 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:30.658 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.658 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:30.915 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.915 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:30.915 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.915 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:30.915 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.915 00:40:54 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:30.915 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.915 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:30.915 [2024-11-18 00:40:54.498344] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:30.915 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.915 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:30.915 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.915 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:30.915 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.915 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=404223 00:35:30.915 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:30.915 00:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:32.831 00:40:56 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 404189 00:35:32.831 00:40:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read 
completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 [2024-11-18 00:40:56.523453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 
00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Write completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 
00:35:32.831 [2024-11-18 00:40:56.523737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.831 starting I/O failed 00:35:32.831 Read completed with error (sct=0, sc=8) 00:35:32.832 starting I/O failed 00:35:32.832 Write completed with error (sct=0, sc=8) 00:35:32.832 starting I/O failed 00:35:32.832 Read completed with error (sct=0, sc=8) 00:35:32.832 starting I/O failed 00:35:32.832 Write completed with error (sct=0, sc=8) 00:35:32.832 starting I/O failed 00:35:32.832 Read completed with error (sct=0, sc=8) 00:35:32.832 starting I/O failed 00:35:32.832 Read completed with error (sct=0, sc=8) 00:35:32.832 starting I/O failed 00:35:32.832 Read completed with error (sct=0, sc=8) 00:35:32.832 starting 
I/O failed
00:35:32.832 Read completed with error (sct=0, sc=8)
00:35:32.832 starting I/O failed
00:35:32.832 Write completed with error (sct=0, sc=8)
00:35:32.832 starting I/O failed
00:35:32.832 [... further "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pairs repeated ...]
00:35:32.832 [2024-11-18 00:40:56.524065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:32.832 [... further "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pairs repeated ...]
00:35:32.832 [2024-11-18 00:40:56.524367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:35:32.832 [2024-11-18 00:40:56.524491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.832 [2024-11-18 00:40:56.524542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.832 qpair failed and we were unable to recover it.
[... the three-line connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." group repeats from 00:40:56.524671 through 00:40:56.539934, harness timestamps 00:35:32.832 through 00:35:32.835, for tqpair=0x7eff44000b90, 0x7eff48000b90, 0x7eff50000b90, and 0x18bcb40, all with addr=10.0.0.2, port=4420 ...]
00:35:32.835 [2024-11-18 00:40:56.540052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.540080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.540203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.540232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.540377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.540404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.540519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.540545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.540657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.540683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 
00:35:32.835 [2024-11-18 00:40:56.540827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.540860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.540980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.541006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.541130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.541160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.541289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.541336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.541487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.541518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 
00:35:32.835 [2024-11-18 00:40:56.541637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.541664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.541749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.541774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.541880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.541906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.542000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.542026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.542143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.542169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 
00:35:32.835 [2024-11-18 00:40:56.542278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.542303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.542422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.542448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.542532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.542558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.542697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.542723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.542802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.542829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 
00:35:32.835 [2024-11-18 00:40:56.542915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.542940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.543090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.543115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.543231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.835 [2024-11-18 00:40:56.543258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.835 qpair failed and we were unable to recover it. 00:35:32.835 [2024-11-18 00:40:56.543390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.543429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.543525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.543563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 
00:35:32.836 [2024-11-18 00:40:56.543672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.543702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.543802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.543831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.543914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.543941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.544018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.544044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.544130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.544157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 
00:35:32.836 [2024-11-18 00:40:56.544271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.544298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.544409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.544435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.544563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.544611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.544759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.544789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.544900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.544927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 
00:35:32.836 [2024-11-18 00:40:56.545050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.545076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.545193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.545225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.545343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.545372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.545485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.545512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.545665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.545713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 
00:35:32.836 [2024-11-18 00:40:56.545894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.545922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.546101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.546130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.546245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.546271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.546448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.546488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.546626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.546676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 
00:35:32.836 [2024-11-18 00:40:56.546764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.546796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.546918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.546944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.547119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.547170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.547276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.547302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.547427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.547452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 
00:35:32.836 [2024-11-18 00:40:56.547536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.547563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.547677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.547703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.547786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.547812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.547952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.547978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.548065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.548091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 
00:35:32.836 [2024-11-18 00:40:56.548188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.548226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.548359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.548388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.548476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.548504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.548624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.548652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 00:35:32.836 [2024-11-18 00:40:56.548743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.836 [2024-11-18 00:40:56.548770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.836 qpair failed and we were unable to recover it. 
00:35:32.836 [2024-11-18 00:40:56.548892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.837 [2024-11-18 00:40:56.548929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.837 qpair failed and we were unable to recover it. 00:35:32.837 [2024-11-18 00:40:56.549033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.837 [2024-11-18 00:40:56.549061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.837 qpair failed and we were unable to recover it. 00:35:32.837 [2024-11-18 00:40:56.549203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.837 [2024-11-18 00:40:56.549231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.837 qpair failed and we were unable to recover it. 00:35:32.837 [2024-11-18 00:40:56.549358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.837 [2024-11-18 00:40:56.549384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.837 qpair failed and we were unable to recover it. 00:35:32.837 [2024-11-18 00:40:56.549530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.837 [2024-11-18 00:40:56.549557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.837 qpair failed and we were unable to recover it. 
00:35:32.837 [2024-11-18 00:40:56.549777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.837 [2024-11-18 00:40:56.549845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.837 qpair failed and we were unable to recover it. 00:35:32.837 [2024-11-18 00:40:56.549990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.837 [2024-11-18 00:40:56.550045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.837 qpair failed and we were unable to recover it. 00:35:32.837 [2024-11-18 00:40:56.550194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.837 [2024-11-18 00:40:56.550225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.837 qpair failed and we were unable to recover it. 00:35:32.837 [2024-11-18 00:40:56.550362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.837 [2024-11-18 00:40:56.550400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.837 qpair failed and we were unable to recover it. 00:35:32.837 [2024-11-18 00:40:56.550550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.837 [2024-11-18 00:40:56.550588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.837 qpair failed and we were unable to recover it. 
00:35:32.837 [2024-11-18 00:40:56.550712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.837 [2024-11-18 00:40:56.550738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.837 qpair failed and we were unable to recover it. 00:35:32.837 [2024-11-18 00:40:56.550964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.837 [2024-11-18 00:40:56.551017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.837 qpair failed and we were unable to recover it. 00:35:32.837 [2024-11-18 00:40:56.551110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.837 [2024-11-18 00:40:56.551142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.837 qpair failed and we were unable to recover it. 00:35:32.837 [2024-11-18 00:40:56.551290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.837 [2024-11-18 00:40:56.551322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.837 qpair failed and we were unable to recover it. 00:35:32.837 [2024-11-18 00:40:56.551476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.837 [2024-11-18 00:40:56.551504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.837 qpair failed and we were unable to recover it. 
00:35:32.837 [2024-11-18 00:40:56.551591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.837 [2024-11-18 00:40:56.551618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.837 qpair failed and we were unable to recover it.
[log condensed: the same three-message sequence — posix.c:1054:posix_sock_create "connect() failed, errno = 111" (ECONNREFUSED), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error", and "qpair failed and we were unable to recover it." — repeats continuously from 00:40:56.551591 through 00:40:56.568763, cycling over tqpair handles 0x7eff44000b90, 0x7eff48000b90, 0x7eff50000b90, and 0x18bcb40, all attempting addr=10.0.0.2, port=4420]
00:35:32.840 [2024-11-18 00:40:56.568872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-18 00:40:56.568899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-18 00:40:56.568985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-18 00:40:56.569013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-18 00:40:56.569134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-18 00:40:56.569161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-18 00:40:56.569278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-18 00:40:56.569305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-18 00:40:56.569413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-18 00:40:56.569440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 
00:35:32.840 [2024-11-18 00:40:56.569526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-18 00:40:56.569553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-18 00:40:56.569646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-18 00:40:56.569673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-18 00:40:56.569764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-18 00:40:56.569790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-18 00:40:56.569930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-18 00:40:56.569958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-18 00:40:56.570084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-18 00:40:56.570114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 
00:35:32.840 [2024-11-18 00:40:56.570237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-18 00:40:56.570265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-18 00:40:56.570361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-18 00:40:56.570388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-18 00:40:56.570493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-18 00:40:56.570520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-18 00:40:56.570710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-18 00:40:56.570738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.840 [2024-11-18 00:40:56.570904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-18 00:40:56.570958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 
00:35:32.840 [2024-11-18 00:40:56.571132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.840 [2024-11-18 00:40:56.571189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.840 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.571268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.571300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.571402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.571428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.571504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.571531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.571619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.571647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 
00:35:32.841 [2024-11-18 00:40:56.571819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.571847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.571934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.571960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.572051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.572080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.572196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.572224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.572340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.572371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 
00:35:32.841 [2024-11-18 00:40:56.572458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.572484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.572599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.572625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.572717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.572744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.572831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.572858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.573000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.573026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 
00:35:32.841 [2024-11-18 00:40:56.573136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.573162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.573240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.573266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.573374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.573402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.573491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.573519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.573595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.573626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 
00:35:32.841 [2024-11-18 00:40:56.573744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.573771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.573886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.573914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.574026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.574053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.574170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.574196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.574321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.574349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 
00:35:32.841 [2024-11-18 00:40:56.574442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.574469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.574561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.574588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.574672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.574698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.574790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.574816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.574900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.574928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 
00:35:32.841 [2024-11-18 00:40:56.575073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.575101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.575216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.575242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.575365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.575391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.575502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.575529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.575609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.575637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 
00:35:32.841 [2024-11-18 00:40:56.575750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.575776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.575914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.575940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.841 [2024-11-18 00:40:56.576074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.841 [2024-11-18 00:40:56.576114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.841 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-18 00:40:56.576212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.576239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-18 00:40:56.576394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.576423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 
00:35:32.842 [2024-11-18 00:40:56.576517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.576545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-18 00:40:56.576657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.576691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-18 00:40:56.576781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.576808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-18 00:40:56.576891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.576917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-18 00:40:56.577061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.577086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 
00:35:32.842 [2024-11-18 00:40:56.577194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.577221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-18 00:40:56.577331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.577368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-18 00:40:56.577506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.577533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-18 00:40:56.577620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.577646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-18 00:40:56.577766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.577793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 
00:35:32.842 [2024-11-18 00:40:56.577904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.577932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-18 00:40:56.578020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.578046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-18 00:40:56.578167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.578192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-18 00:40:56.578274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.578300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-18 00:40:56.578464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.578491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 
00:35:32.842 [2024-11-18 00:40:56.578612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.578642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-18 00:40:56.578789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.578817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-18 00:40:56.578909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.578935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-18 00:40:56.579052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.579079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 00:35:32.842 [2024-11-18 00:40:56.579205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.842 [2024-11-18 00:40:56.579231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.842 qpair failed and we were unable to recover it. 
00:35:32.842 [2024-11-18 00:40:56.579344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.842 [2024-11-18 00:40:56.579377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.842 qpair failed and we were unable to recover it.
00:35:32.842 [2024-11-18 00:40:56.579464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.842 [2024-11-18 00:40:56.579491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.842 qpair failed and we were unable to recover it.
00:35:32.842 [2024-11-18 00:40:56.579608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.842 [2024-11-18 00:40:56.579636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.842 qpair failed and we were unable to recover it.
00:35:32.842 [2024-11-18 00:40:56.579755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.842 [2024-11-18 00:40:56.579784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.842 qpair failed and we were unable to recover it.
00:35:32.842 [2024-11-18 00:40:56.579907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.842 [2024-11-18 00:40:56.579933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.842 qpair failed and we were unable to recover it.
00:35:32.842 [2024-11-18 00:40:56.580080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.842 [2024-11-18 00:40:56.580108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.842 qpair failed and we were unable to recover it.
00:35:32.842 [2024-11-18 00:40:56.580191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.842 [2024-11-18 00:40:56.580217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.842 qpair failed and we were unable to recover it.
00:35:32.842 [2024-11-18 00:40:56.580332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.842 [2024-11-18 00:40:56.580365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.842 qpair failed and we were unable to recover it.
00:35:32.842 [2024-11-18 00:40:56.580464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.842 [2024-11-18 00:40:56.580504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:32.842 qpair failed and we were unable to recover it.
00:35:32.842 [2024-11-18 00:40:56.580713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.842 [2024-11-18 00:40:56.580775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.842 qpair failed and we were unable to recover it.
00:35:32.842 [2024-11-18 00:40:56.580965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.842 [2024-11-18 00:40:56.580994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.842 qpair failed and we were unable to recover it.
00:35:32.842 [2024-11-18 00:40:56.581118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.842 [2024-11-18 00:40:56.581146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.842 qpair failed and we were unable to recover it.
00:35:32.842 [2024-11-18 00:40:56.581236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.842 [2024-11-18 00:40:56.581263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.842 qpair failed and we were unable to recover it.
00:35:32.842 [2024-11-18 00:40:56.581383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.842 [2024-11-18 00:40:56.581409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.842 qpair failed and we were unable to recover it.
00:35:32.842 [2024-11-18 00:40:56.581492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.581520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.581610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.581636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.581750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.581777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.581868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.581897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.582016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.582045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.582165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.582193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.582282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.582307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.582430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.582461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.582549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.582584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.582676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.582703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.582784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.582809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.582908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.582947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.583062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.583089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.583203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.583231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.583354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.583381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.583471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.583497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.583617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.583644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.583785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.583812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.583935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.583962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.584079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.584108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.584195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.584221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.584376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.584404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.584550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.584589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.584702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.584728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.584838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.584866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.584977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.585005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.585149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.585176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.585287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.585320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.585459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.585487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.585571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.585597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.585715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.585740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.585836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.585874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.586018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.586049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.586206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.586247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.586343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.586370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.586457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.586483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.586626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.843 [2024-11-18 00:40:56.586651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.843 qpair failed and we were unable to recover it.
00:35:32.843 [2024-11-18 00:40:56.586765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.586794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.586939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.586967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.587074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.587099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.587204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.587244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.587427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.587468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.587591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.587620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.587733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.587758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.587976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.588028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.588142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.588168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.588275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.588302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.588426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.588453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.588546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.588573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.588712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.588739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.588853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.588879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.589019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.589046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.589154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.589180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.589295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.589329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.589445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.589473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.589588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.589616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.589759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.589787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.589904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.589931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.590011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.590038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.590169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.590206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.590303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.590343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.590494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.590523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.590607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.590633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.590768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.590797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.590904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.590932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.591018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.591043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.591126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.591152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.591239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.591266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.591386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.591413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.591525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.591552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.591633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.591659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.591794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.591856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.591948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.591986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.592102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.592132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.592256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.844 [2024-11-18 00:40:56.592302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:32.844 qpair failed and we were unable to recover it.
00:35:32.844 [2024-11-18 00:40:56.592430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.592458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.592603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.592631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.592738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.592764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.592847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.592872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.592992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.593019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.593132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.593159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.593280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.593320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.593437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.593466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.593550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.593576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.593685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.593711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.593831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.593859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.593943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.593968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.594055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.594081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.594170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.594197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.594321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.594351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.594438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.594465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.594573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.594608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.594724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.594750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.594835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.594863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.594967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.595008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.595128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.595158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.595272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.595300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.595427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.595453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.595593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.595621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.595790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.595850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.595932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.845 [2024-11-18 00:40:56.595958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:32.845 qpair failed and we were unable to recover it.
00:35:32.845 [2024-11-18 00:40:56.596103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-18 00:40:56.596133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-18 00:40:56.596245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-18 00:40:56.596272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-18 00:40:56.596381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-18 00:40:56.596408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-18 00:40:56.596519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-18 00:40:56.596546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-18 00:40:56.596663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-18 00:40:56.596691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 
00:35:32.845 [2024-11-18 00:40:56.596778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-18 00:40:56.596804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.845 [2024-11-18 00:40:56.597016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.845 [2024-11-18 00:40:56.597069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.845 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.597155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.597182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.597295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.597329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.597412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.597438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 
00:35:32.846 [2024-11-18 00:40:56.597553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.597581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.597715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.597742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.597830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.597855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.598085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.598140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.598254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.598282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 
00:35:32.846 [2024-11-18 00:40:56.598413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.598443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.598534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.598560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.598702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.598730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.598846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.598872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.598955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.598981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 
00:35:32.846 [2024-11-18 00:40:56.599096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.599123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.599211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.599238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.599381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.599409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.599539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.599581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.599806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.599857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 
00:35:32.846 [2024-11-18 00:40:56.600082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.600131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.600220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.600247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.600400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.600429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.600544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.600573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.600663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.600689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 
00:35:32.846 [2024-11-18 00:40:56.600798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.600826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.600960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.601028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.601197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.601249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.601364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.601391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.601511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.601539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 
00:35:32.846 [2024-11-18 00:40:56.601655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.601682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.601795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.601822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.601906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.601933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.602017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.602043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.602160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.602186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 
00:35:32.846 [2024-11-18 00:40:56.602270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.602303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.602435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.602473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.602596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.602623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.846 [2024-11-18 00:40:56.602828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.846 [2024-11-18 00:40:56.602858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.846 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.602965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.602991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 
00:35:32.847 [2024-11-18 00:40:56.603118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.603166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.603289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.603324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.603419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.603444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.603587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.603614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.603794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.603822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 
00:35:32.847 [2024-11-18 00:40:56.604034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.604088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.604201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.604229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.604306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.604340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.604438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.604464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.604589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.604616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 
00:35:32.847 [2024-11-18 00:40:56.604755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.604783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.604897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.604925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.605039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.605066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.605157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.605183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.605341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.605382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 
00:35:32.847 [2024-11-18 00:40:56.605501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.605531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.605624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.605650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.605758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.605816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.605950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.606008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.606136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.606176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 
00:35:32.847 [2024-11-18 00:40:56.606302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.606336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.606428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.606455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.606576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.606605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.606719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.606747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.606858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.606885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 
00:35:32.847 [2024-11-18 00:40:56.606965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.606991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.607072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.607099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.607215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.607243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.607359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.607393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.607506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.607538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 
00:35:32.847 [2024-11-18 00:40:56.607670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.607711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.607858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.607888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.608032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.608061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.608178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.608206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.608332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.608362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 
00:35:32.847 [2024-11-18 00:40:56.608452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.847 [2024-11-18 00:40:56.608478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.847 qpair failed and we were unable to recover it. 00:35:32.847 [2024-11-18 00:40:56.608648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.608701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.608888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.608942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.609173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.609200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.609322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.609351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 
00:35:32.848 [2024-11-18 00:40:56.609465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.609493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.609581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.609608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.609725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.609753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.609951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.609991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.610116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.610146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 
00:35:32.848 [2024-11-18 00:40:56.610225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.610251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.610381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.610409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.610524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.610552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.610634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.610660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.610759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.610788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 
00:35:32.848 [2024-11-18 00:40:56.610881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.610908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.611016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.611044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.611160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.611187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.611298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.611335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.611424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.611451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 
00:35:32.848 [2024-11-18 00:40:56.611529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.611556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.611700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.611728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.611839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.611865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.611939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.611965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.612080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.612108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 
00:35:32.848 [2024-11-18 00:40:56.612217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.612244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.612375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.612404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.612520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.612554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.612640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.612667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.612810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.612838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 
00:35:32.848 [2024-11-18 00:40:56.612956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.612984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.613097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.613126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.613249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.613290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.613384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.613412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.613525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.613553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 
00:35:32.848 [2024-11-18 00:40:56.613668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.613695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.613805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.613831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.613973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.614001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.614115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.848 [2024-11-18 00:40:56.614142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.848 qpair failed and we were unable to recover it. 00:35:32.848 [2024-11-18 00:40:56.614235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.614264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 
00:35:32.849 [2024-11-18 00:40:56.614397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.614437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.614540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.614567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.614784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.614811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.614886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.614913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.615029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.615057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 
00:35:32.849 [2024-11-18 00:40:56.615177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.615205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.615339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.615367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.615508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.615535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.615647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.615675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.615787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.615814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 
00:35:32.849 [2024-11-18 00:40:56.615928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.615955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.616102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.616131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.616242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.616270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.616396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.616426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.616511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.616540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 
00:35:32.849 [2024-11-18 00:40:56.616624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.616650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.616786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.616814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.616930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.616959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.617051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.617078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.617225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.617254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 
00:35:32.849 [2024-11-18 00:40:56.617382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.617410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.617523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.617551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.617636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.617663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.617835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.617863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.617951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.617978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 
00:35:32.849 [2024-11-18 00:40:56.618064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.618090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.618201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.618231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.618327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.618359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.618475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.618502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.618640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.618668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 
00:35:32.849 [2024-11-18 00:40:56.618757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.618782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.849 qpair failed and we were unable to recover it. 00:35:32.849 [2024-11-18 00:40:56.618896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.849 [2024-11-18 00:40:56.618925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.619047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.619075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.619217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.619245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.619384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.619412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 
00:35:32.850 [2024-11-18 00:40:56.619492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.619519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.619661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.619689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.619838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.619866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.619959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.619985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.620061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.620086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 
00:35:32.850 [2024-11-18 00:40:56.620193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.620220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.620336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.620365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.620480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.620507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.620647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.620674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.620790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.620817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 
00:35:32.850 [2024-11-18 00:40:56.620903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.620929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.621072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.621101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.621221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.621249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.621358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.621385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.621525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.621553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 
00:35:32.850 [2024-11-18 00:40:56.621695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.621722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.621836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.621864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.621950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.621976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.622117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.622144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 00:35:32.850 [2024-11-18 00:40:56.622300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.850 [2024-11-18 00:40:56.622350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:32.850 qpair failed and we were unable to recover it. 
00:35:32.853 [2024-11-18 00:40:56.637865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.853 [2024-11-18 00:40:56.637891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:32.853 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.637972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.637998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.638122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.638150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.638268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.638295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.638402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.638438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 
00:35:33.148 [2024-11-18 00:40:56.638583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.638611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.638731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.638759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.638868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.638895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.638990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.639018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.639125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.639153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 
00:35:33.148 [2024-11-18 00:40:56.639269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.639297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.639393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.639419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.639532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.639559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.639671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.639698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.639843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.639871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 
00:35:33.148 [2024-11-18 00:40:56.639956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.639983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.640098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.640130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.640222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.640248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.640340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.640367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.640457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.640483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 
00:35:33.148 [2024-11-18 00:40:56.640567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.640594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.640677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.640704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.640821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.640850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.640971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.641004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.641135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.641176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 
00:35:33.148 [2024-11-18 00:40:56.641292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.641328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.641417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.641444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.641562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.641590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.641698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.641725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.641808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.641836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 
00:35:33.148 [2024-11-18 00:40:56.641949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.642016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.642154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.642194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.642284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.642321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.642412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.642439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.642517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.642545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 
00:35:33.148 [2024-11-18 00:40:56.642776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.642829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.642961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.643020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.643156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.643196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.643351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.643390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.643509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.643536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 
00:35:33.148 [2024-11-18 00:40:56.643621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.643647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.643785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.643824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.643982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.644044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.644157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.644184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.644292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.644328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 
00:35:33.148 [2024-11-18 00:40:56.644449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.644478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.644621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.644648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.644795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.644822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.148 [2024-11-18 00:40:56.644980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.148 [2024-11-18 00:40:56.645036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.148 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.645129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.645159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 
00:35:33.149 [2024-11-18 00:40:56.645284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.645320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.645416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.645441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.645552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.645579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.645660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.645685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.645828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.645855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 
00:35:33.149 [2024-11-18 00:40:56.645932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.645958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.646072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.646099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.646239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.646267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.646390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.646418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.646516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.646545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 
00:35:33.149 [2024-11-18 00:40:56.646627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.646653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.646765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.646800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.646913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.646941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.647048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.647081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.647166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.647200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 
00:35:33.149 [2024-11-18 00:40:56.647352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.647380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.647510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.647537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.647622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.647655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.647738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.647764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.647880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.647907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 
00:35:33.149 [2024-11-18 00:40:56.647985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.648012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.648128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.648154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.648263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.648290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.648417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.648444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.648556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.648588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 
00:35:33.149 [2024-11-18 00:40:56.648669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.648696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.648789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.648816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.648931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.648960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.649126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.649167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.649266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.649297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 
00:35:33.149 [2024-11-18 00:40:56.649426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.649454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.649575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.649603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.649716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.649744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.649862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.649888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.650001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.650028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 
00:35:33.149 [2024-11-18 00:40:56.650141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.650170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.650253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.650280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.650414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.650444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.650565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.650594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.650681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.650709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 
00:35:33.149 [2024-11-18 00:40:56.650820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.650856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.651045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.651104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.651186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.651212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.651354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.651386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.651471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.651497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 
00:35:33.149 [2024-11-18 00:40:56.651590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.651616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.651738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.651765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.651873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.651900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.652042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.652068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.652152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.652177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 
00:35:33.149 [2024-11-18 00:40:56.652268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.652321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.652438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.652467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.652617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.652645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.652738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.149 [2024-11-18 00:40:56.652767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.149 qpair failed and we were unable to recover it. 00:35:33.149 [2024-11-18 00:40:56.652975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.653006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 
00:35:33.150 [2024-11-18 00:40:56.653099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.653127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.653242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.653271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.653376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.653402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.653515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.653543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.653635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.653662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 
00:35:33.150 [2024-11-18 00:40:56.653759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.653786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.653952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.654003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.654117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.654144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.654284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.654320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.654441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.654468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 
00:35:33.150 [2024-11-18 00:40:56.654594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.654621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.654705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.654731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.654829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.654857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.655010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.655037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.655152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.655179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 
00:35:33.150 [2024-11-18 00:40:56.655288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.655336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.655463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.655492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.655595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.655647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.655781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.655810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.655891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.655917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 
00:35:33.150 [2024-11-18 00:40:56.656109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.656165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.656305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.656342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.656453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.656481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.656596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.656624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.656738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.656766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 
00:35:33.150 [2024-11-18 00:40:56.656886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.656919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.657060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.657088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.657233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.657261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.657390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.657419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.657526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.657553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 
00:35:33.150 [2024-11-18 00:40:56.657652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.657678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.657814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.657841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.657984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.658011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.658099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.658126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.658237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.658266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 
00:35:33.150 [2024-11-18 00:40:56.658401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.658442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.658562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.658593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.658732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.658760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.658917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.658975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.659096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.659123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 
00:35:33.150 [2024-11-18 00:40:56.659265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.659291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.659428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.659455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.659570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.659597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.659723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.659751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.659837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.659863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 
00:35:33.150 [2024-11-18 00:40:56.659952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.659979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.660126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.660153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.660271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.660297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.660402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.660430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.660520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.660546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 
00:35:33.150 [2024-11-18 00:40:56.660683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.660710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.660789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.660814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.660904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.660933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.661024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.150 [2024-11-18 00:40:56.661052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.150 qpair failed and we were unable to recover it. 00:35:33.150 [2024-11-18 00:40:56.661160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.661188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 
00:35:33.151 [2024-11-18 00:40:56.661269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.661295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-18 00:40:56.661398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.661429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-18 00:40:56.661514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.661541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-18 00:40:56.661630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.661658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-18 00:40:56.661781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.661808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 
00:35:33.151 [2024-11-18 00:40:56.661890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.661915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-18 00:40:56.662000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.662027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-18 00:40:56.662141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.662168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-18 00:40:56.662282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.662309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-18 00:40:56.662424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.662450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 
00:35:33.151 [2024-11-18 00:40:56.662592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.662626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-18 00:40:56.662746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.662773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-18 00:40:56.662916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.662945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-18 00:40:56.663038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.663065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-18 00:40:56.663224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.663264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 
00:35:33.151 [2024-11-18 00:40:56.663374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.663403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-18 00:40:56.663496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.663523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-18 00:40:56.663697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.663749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-18 00:40:56.663930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.663993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 00:35:33.151 [2024-11-18 00:40:56.664120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.151 [2024-11-18 00:40:56.664147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.151 qpair failed and we were unable to recover it. 
00:35:33.151 [2024-11-18 00:40:56.664261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.664288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.664388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.664414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.664522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.664549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.664661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.664698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.664831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.664858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.664964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.664991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.665093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.665135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.665228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.665257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.665351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.665378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.665521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.665548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.665662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.665688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.665828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.665855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.666074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.666101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.666197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.666228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.666378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.666406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.666521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.666548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.666664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.666719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.666802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.666834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.666928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.666957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.667101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.667128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.667239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.667266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.667370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.667396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.667485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.667512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.667668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.667722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.667870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.667924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.668020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.668060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.151 [2024-11-18 00:40:56.668186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.151 [2024-11-18 00:40:56.668216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.151 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.668327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.668366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.668480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.668508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.668609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.668637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.668783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.668829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.668975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.669003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.669109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.669137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.669261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.669300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.669398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.669426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.669541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.669577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.669720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.669746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.669855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.669893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.669983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.670009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.670093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.670120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.670228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.670255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.670378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.670410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.670496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.670523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.670679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.670706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.670814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.670847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.670935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.670961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.671084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.671128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.671246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.671274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.671398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.671428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.671519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.671547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.671672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.671700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.671842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.671871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.671986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.672014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.672095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.672126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.672246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.672275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.672387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.672415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.672502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.672529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.672614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.672640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.672760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.672787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.672901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.672928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.673035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.673062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.673176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.673206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.673293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.673330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.673476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.673503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.673639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.673666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.673783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.673811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.673923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.673951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.674068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.674096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.674185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.674214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.674343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.674384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.674481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.674509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.674654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.674682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.674799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.674826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.674944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.674971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.675082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.675109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.675195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.675222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.675319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.675346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.675467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.675495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.675678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.675745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.675877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.675947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.676063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.676092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.152 [2024-11-18 00:40:56.676209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.152 [2024-11-18 00:40:56.676237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.152 qpair failed and we were unable to recover it.
00:35:33.153 [2024-11-18 00:40:56.676350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.153 [2024-11-18 00:40:56.676378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.153 qpair failed and we were unable to recover it.
00:35:33.153 [2024-11-18 00:40:56.676497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.153 [2024-11-18 00:40:56.676524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.153 qpair failed and we were unable to recover it.
00:35:33.153 [2024-11-18 00:40:56.676638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.153 [2024-11-18 00:40:56.676665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.153 qpair failed and we were unable to recover it.
00:35:33.153 [2024-11-18 00:40:56.676812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.153 [2024-11-18 00:40:56.676839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.153 qpair failed and we were unable to recover it.
00:35:33.153 [2024-11-18 00:40:56.676926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.153 [2024-11-18 00:40:56.676955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.153 qpair failed and we were unable to recover it.
00:35:33.153 [2024-11-18 00:40:56.677096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.153 [2024-11-18 00:40:56.677123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.153 qpair failed and we were unable to recover it.
00:35:33.153 [2024-11-18 00:40:56.677217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.153 [2024-11-18 00:40:56.677243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.153 qpair failed and we were unable to recover it.
00:35:33.153 [2024-11-18 00:40:56.677359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.153 [2024-11-18 00:40:56.677386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.153 qpair failed and we were unable to recover it.
00:35:33.153 [2024-11-18 00:40:56.677505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.153 [2024-11-18 00:40:56.677533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.153 qpair failed and we were unable to recover it.
00:35:33.153 [2024-11-18 00:40:56.677619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.153 [2024-11-18 00:40:56.677648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.153 qpair failed and we were unable to recover it.
00:35:33.153 [2024-11-18 00:40:56.677774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.153 [2024-11-18 00:40:56.677804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.153 qpair failed and we were unable to recover it.
00:35:33.153 [2024-11-18 00:40:56.677917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.153 [2024-11-18 00:40:56.677946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.153 qpair failed and we were unable to recover it.
00:35:33.153 [2024-11-18 00:40:56.678033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.153 [2024-11-18 00:40:56.678060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.153 qpair failed and we were unable to recover it.
00:35:33.153 [2024-11-18 00:40:56.678138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.678164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.678273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.678302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.678426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.678453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.678578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.678606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.678707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.678735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 
00:35:33.153 [2024-11-18 00:40:56.678853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.678893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.678987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.679016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.679129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.679157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.679280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.679307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.679396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.679422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 
00:35:33.153 [2024-11-18 00:40:56.679533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.679560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.679665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.679692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.679771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.679796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.679909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.679936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.680051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.680080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 
00:35:33.153 [2024-11-18 00:40:56.680194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.680223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.680341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.680373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.680453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.680478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.680621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.680648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.680758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.680785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 
00:35:33.153 [2024-11-18 00:40:56.680922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.680950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.681065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.681091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.681172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.681199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.681322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.681350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.681465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.681493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 
00:35:33.153 [2024-11-18 00:40:56.681575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.681602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.681693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.681721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.681833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.681859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.681949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.681977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.682094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.682121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 
00:35:33.153 [2024-11-18 00:40:56.682267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.682295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.682421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.682448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.682558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.682584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.682693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.682719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.682832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.682859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 
00:35:33.153 [2024-11-18 00:40:56.682944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.682973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.683106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.683146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.683288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.683336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.683455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.683484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.683599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.683627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 
00:35:33.153 [2024-11-18 00:40:56.683768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.683795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.153 [2024-11-18 00:40:56.683910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.153 [2024-11-18 00:40:56.683939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.153 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.684053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.684081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.684198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.684230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.684351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.684381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 
00:35:33.154 [2024-11-18 00:40:56.684490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.684518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.684655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.684683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.684797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.684824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.684913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.684941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.685026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.685054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 
00:35:33.154 [2024-11-18 00:40:56.685196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.685223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.685363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.685389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.685471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.685497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.685603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.685629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.685744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.685772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 
00:35:33.154 [2024-11-18 00:40:56.685864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.685891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.686018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.686063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.686184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.686213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.686295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.686330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.686418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.686445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 
00:35:33.154 [2024-11-18 00:40:56.686586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.686614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.686760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.686787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.686894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.686921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.687033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.687060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.687147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.687176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 
00:35:33.154 [2024-11-18 00:40:56.687322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.687352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.687443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.687470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.687586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.687612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.687748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.687775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.687862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.687890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 
00:35:33.154 [2024-11-18 00:40:56.688034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.688061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.688204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.688232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.688368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.688409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.688509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.688538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.688658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.688685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 
00:35:33.154 [2024-11-18 00:40:56.688802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.688831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.688932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.688973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.689133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.689162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.689257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.689287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.689390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.689417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 
00:35:33.154 [2024-11-18 00:40:56.689499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.689528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.689653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.689708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.689879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.689908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.690019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.690055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.690151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.690182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 
00:35:33.154 [2024-11-18 00:40:56.690325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.690366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.690489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.690518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.690610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.690637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.690754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.690781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 00:35:33.154 [2024-11-18 00:40:56.690935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.154 [2024-11-18 00:40:56.690962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.154 qpair failed and we were unable to recover it. 
00:35:33.154 [2024-11-18 00:40:56.691076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-18 00:40:56.691103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-18 00:40:56.691214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-18 00:40:56.691241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-18 00:40:56.691360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-18 00:40:56.691388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-18 00:40:56.691503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-18 00:40:56.691533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 00:35:33.155 [2024-11-18 00:40:56.691650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.155 [2024-11-18 00:40:56.691678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.155 qpair failed and we were unable to recover it. 
00:35:33.155 [2024-11-18 00:40:56.691785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.691813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.691974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.692028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.692158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.692188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.692331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.692360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.692507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.692536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.692646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.692673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.692757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.692783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.692906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.692933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.693015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.693042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.693150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.693177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.693320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.693347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.693461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.693488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.693617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.693658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.693851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.693880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.694001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.694031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.694180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.694208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.694353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.694381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.694475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.694503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.694621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.694649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.694736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.694763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.694845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.694872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.694983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.695011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.695092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.695118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.695234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.695263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.695360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.695387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.695478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.695505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.695618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.695645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.695724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.695750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.695891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.695925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.696039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.696067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.696183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.696210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.696339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.696379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.696504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.696532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.696648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.696675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.696790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.696818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.697020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.697048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.697159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.697186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.697273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.697300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.697419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.697446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.697532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.697559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.697673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.697701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.697821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.697848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.697982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.698022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.698148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.698178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.698263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.698292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.698415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.698443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.698557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.698584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.698698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.698725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.698837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.698865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.698971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.699010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.699113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.699142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.699283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.699317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.699409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.699436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.699573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.699599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.699741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.699768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.699920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.699955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.700055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.700096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.700213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.700242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.700356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.700385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.700528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.700556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.700646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.700674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.155 [2024-11-18 00:40:56.700794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.155 [2024-11-18 00:40:56.700822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.155 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.700962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.700989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.701067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.701093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.701207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.701234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.701355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.701384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.701499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.701527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.701618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.701646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.701728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.701757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.701848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.701875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.701995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.702024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.702102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.702127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.702273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.702301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.702429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.702456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.702595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.702622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.702733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.702761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.702876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.702904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.703022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.703051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.703194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.703221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.703343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.703372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.703511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.703538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.703640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.703697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.703876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.703941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.704036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.704064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.704164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.704204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.704301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.704342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.704454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.704481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.704596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.704623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.704715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.704741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.704825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.704852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.704936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.704966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.705050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.705079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.705163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.705191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.705322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.705350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.705444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.156 [2024-11-18 00:40:56.705469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.156 qpair failed and we were unable to recover it.
00:35:33.156 [2024-11-18 00:40:56.705594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.705640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.705840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.705896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.705983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.706012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.706131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.706159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.706247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.706277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 
00:35:33.156 [2024-11-18 00:40:56.706375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.706404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.706519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.706545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.706632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.706659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.706854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.706908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.706985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.707010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 
00:35:33.156 [2024-11-18 00:40:56.707095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.707122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.707277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.707324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.707446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.707475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.707594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.707625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.707859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.707905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 
00:35:33.156 [2024-11-18 00:40:56.707990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.708018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.708128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.708156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.708243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.708273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.708411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.708452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.708546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.708574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 
00:35:33.156 [2024-11-18 00:40:56.708653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.708680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.708828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.708854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.708969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.709027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.709137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.709164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.709308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.709344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 
00:35:33.156 [2024-11-18 00:40:56.709488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.709516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.709632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.709659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.709823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.709880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.709994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.710023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.710110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.710137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 
00:35:33.156 [2024-11-18 00:40:56.710277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.710305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.710398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.710424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.156 [2024-11-18 00:40:56.710538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.156 [2024-11-18 00:40:56.710565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.156 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.710705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.710731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.710843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.710870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 
00:35:33.157 [2024-11-18 00:40:56.711009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.711036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.711144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.711171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.711290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.711329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.711494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.711533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.711659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.711689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 
00:35:33.157 [2024-11-18 00:40:56.711805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.711834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.711923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.711951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.712094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.712123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.712256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.712296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.712398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.712426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 
00:35:33.157 [2024-11-18 00:40:56.712547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.712574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.712687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.712715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.712802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.712826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.712943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.712972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.713083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.713111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 
00:35:33.157 [2024-11-18 00:40:56.713200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.713227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.713343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.713372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.713485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.713513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.713593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.713621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.713747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.713776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 
00:35:33.157 [2024-11-18 00:40:56.713889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.713916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.714030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.714057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.714166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.714192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.714284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.714317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.714402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.714427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 
00:35:33.157 [2024-11-18 00:40:56.714531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.714558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.714671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.714698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.714815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.714843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.714957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.714985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.715101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.715128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 
00:35:33.157 [2024-11-18 00:40:56.715244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.715271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.715362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.715389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.715508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.715541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.715653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.715680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.715793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.715821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 
00:35:33.157 [2024-11-18 00:40:56.715909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.715938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.716024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.716051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.716135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.716162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.716248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.716275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.716396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.716423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 
00:35:33.157 [2024-11-18 00:40:56.716542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.716569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.716654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.716682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.716822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.716849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.716964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.716994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.717110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.717137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 
00:35:33.157 [2024-11-18 00:40:56.717219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.717246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.717335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.717361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.717478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.717505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.717644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.717670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.717785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.717811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 
00:35:33.157 [2024-11-18 00:40:56.717955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.717983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.718106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.718132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.718251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.718280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.718370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.718396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.718479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.718507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 
00:35:33.157 [2024-11-18 00:40:56.718619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.718647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.718758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.718785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.718904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.718930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.719072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.157 [2024-11-18 00:40:56.719100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.157 qpair failed and we were unable to recover it. 00:35:33.157 [2024-11-18 00:40:56.719235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.719277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 
00:35:33.158 [2024-11-18 00:40:56.719409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.719449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.719565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.719594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.719766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.719818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.719958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.720026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.720166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.720193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 
00:35:33.158 [2024-11-18 00:40:56.720304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.720343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.720430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.720457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.720602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.720629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.720796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.720848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.720993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.721054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 
00:35:33.158 [2024-11-18 00:40:56.721169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.721198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.721323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.721351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.721443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.721475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.721615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.721642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.721806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.721869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 
00:35:33.158 [2024-11-18 00:40:56.722053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.722106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.722247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.722275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.722391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.722419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.722501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.722529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.722646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.722674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 
00:35:33.158 [2024-11-18 00:40:56.722867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.722920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.723071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.723125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.723202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.723228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.723341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.723369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.723487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.723514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 
00:35:33.158 [2024-11-18 00:40:56.723602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.723628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.723780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.723807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.723888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.723913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.724029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.724055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.724161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.724188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 
00:35:33.158 [2024-11-18 00:40:56.724304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.724407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.724522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.724549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.724686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.724713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.724799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.724824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.724902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.724931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 
00:35:33.158 [2024-11-18 00:40:56.725040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.725066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.725145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.725170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.725319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.725346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.725435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.725462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.725578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.725611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 
00:35:33.158 [2024-11-18 00:40:56.725748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.725775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.725921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.725968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.726110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.726137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.726224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.726252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.726390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.726430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 
00:35:33.158 [2024-11-18 00:40:56.726556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.726584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.726671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.726698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.726932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.726996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.727158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.727184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.727262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.727287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 
00:35:33.158 [2024-11-18 00:40:56.727436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.727465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.727578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.727605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.727683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.727709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.727806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.727834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.727958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.727999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 
00:35:33.158 [2024-11-18 00:40:56.728124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.728154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.728276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.728305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.728435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.728462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.728557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.728583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.728673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.728700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 
00:35:33.158 [2024-11-18 00:40:56.728796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.728823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.728941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.728968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.729095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.729136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.158 [2024-11-18 00:40:56.729290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.158 [2024-11-18 00:40:56.729327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.158 qpair failed and we were unable to recover it. 00:35:33.159 [2024-11-18 00:40:56.729474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.729503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 
00:35:33.159 [2024-11-18 00:40:56.729620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.729648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 00:35:33.159 [2024-11-18 00:40:56.729729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.729762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 00:35:33.159 [2024-11-18 00:40:56.729855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.729882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 00:35:33.159 [2024-11-18 00:40:56.730030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.730078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 00:35:33.159 [2024-11-18 00:40:56.730188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.730216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 
00:35:33.159 [2024-11-18 00:40:56.730303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.730336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 00:35:33.159 [2024-11-18 00:40:56.730480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.730508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 00:35:33.159 [2024-11-18 00:40:56.730654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.730682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 00:35:33.159 [2024-11-18 00:40:56.730791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.730818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 00:35:33.159 [2024-11-18 00:40:56.730933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.730961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 
00:35:33.159 [2024-11-18 00:40:56.731081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.731108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 00:35:33.159 [2024-11-18 00:40:56.731250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.731277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 00:35:33.159 [2024-11-18 00:40:56.731398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.731439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 00:35:33.159 [2024-11-18 00:40:56.731540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.731569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 00:35:33.159 [2024-11-18 00:40:56.731681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.731708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 
00:35:33.159 [2024-11-18 00:40:56.731821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.731848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 00:35:33.159 [2024-11-18 00:40:56.731957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.731984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 00:35:33.159 [2024-11-18 00:40:56.732073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.732112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 00:35:33.159 [2024-11-18 00:40:56.732238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.732266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 00:35:33.159 [2024-11-18 00:40:56.732391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.159 [2024-11-18 00:40:56.732420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.159 qpair failed and we were unable to recover it. 
00:35:33.159 [2024-11-18 00:40:56.732504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.732532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.732648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.732675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.732785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.732812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.732925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.732954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.733077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.733107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.733218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.733245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.733346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.733374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.733484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.733511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.733629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.733656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.733775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.733802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.733893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.733921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.734044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.734075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.734219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.734248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.734363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.734391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.734507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.734536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.734648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.734676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.734802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.734829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.734947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.734975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.735084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.735111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.735228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.735257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.735352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.735379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.735496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.735529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.735630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.735657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.735732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.735758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.735870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.735897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.736007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.736034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.736152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.736179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.736291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.736329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.736443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.736470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.736610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.736637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.736747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.736773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.736880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.736907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.737032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.737072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.737187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.737228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.737324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.737354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.737506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.737533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.737647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.737675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.737816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.737844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.738068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.738095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.738237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.738265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.738413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.738441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.738579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.738606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.738742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.159 [2024-11-18 00:40:56.738770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.159 qpair failed and we were unable to recover it.
00:35:33.159 [2024-11-18 00:40:56.738862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.738890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.739007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.739035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.739179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.739206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.739331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.739363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.739484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.739513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.739626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.739667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.739793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.739822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.740004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.740049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.740161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.740188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.740334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.740363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.740477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.740504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.740589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.740616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.740727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.740754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.740843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.740872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.740993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.741022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.741137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.741164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.741250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.741277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.741370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.741398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.741515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.741548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.741676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.741703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.741846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.741873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.741985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.742014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.742129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.742158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.742323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.742365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.742456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.742485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.742621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.742649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.742861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.742914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.743042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.743096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.743210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.743237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.743327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.743360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.743475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.743502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.743581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.743607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.743729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.743756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.743839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.743864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.743943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.743972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.744081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.744109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.744199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.744226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.744338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.744365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.744477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.744504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.744613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.744640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.744754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.744781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.744893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.744922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.745042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.745082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.745202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.745230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.745350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.745378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.745484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.745516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.745629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.745657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.745752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.745780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.745869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.745897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.746013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.746040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.746141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.746182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.746303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.746340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.746431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.746464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.746593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.746621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.746732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.746760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.746849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.746877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.747023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.747050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.747139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.747166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.747248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.747275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.747404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.747433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.747548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.747576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.747692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.747720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.747835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.747862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.747985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.748012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.748129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.748157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.748298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.748333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.748453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.748481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.160 qpair failed and we were unable to recover it.
00:35:33.160 [2024-11-18 00:40:56.748602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.160 [2024-11-18 00:40:56.748629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.161 qpair failed and we were unable to recover it.
00:35:33.161 [2024-11-18 00:40:56.748754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.161 [2024-11-18 00:40:56.748783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.161 qpair failed and we were unable to recover it.
00:35:33.161 [2024-11-18 00:40:56.748871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.161 [2024-11-18 00:40:56.748898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.161 qpair failed and we were unable to recover it.
00:35:33.161 [2024-11-18 00:40:56.749015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.161 [2024-11-18 00:40:56.749042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.161 qpair failed and we were unable to recover it.
00:35:33.161 [2024-11-18 00:40:56.749153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.161 [2024-11-18 00:40:56.749180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.161 qpair failed and we were unable to recover it.
00:35:33.161 [2024-11-18 00:40:56.749294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.749328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.749444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.749472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.749592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.749620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.749739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.749767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.749853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.749880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 
00:35:33.161 [2024-11-18 00:40:56.750024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.750052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.750169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.750197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.750330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.750371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.750487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.750514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.750598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.750625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 
00:35:33.161 [2024-11-18 00:40:56.750766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.750793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.750965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.751021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.751136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.751163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.751248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.751283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.751401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.751430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 
00:35:33.161 [2024-11-18 00:40:56.751566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.751606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.751765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.751818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.751924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.751990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.752127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.752155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.752297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.752331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 
00:35:33.161 [2024-11-18 00:40:56.752414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.752441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.752530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.752558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.752639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.752667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.752854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.752912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.753058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.753086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 
00:35:33.161 [2024-11-18 00:40:56.753176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.753202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.753349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.753378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.753477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.753505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.753623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.753650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.753765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.753792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 
00:35:33.161 [2024-11-18 00:40:56.753869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.753894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.753968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.753994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.754100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.754127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.754216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.754244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.754331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.754358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 
00:35:33.161 [2024-11-18 00:40:56.754473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.754502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.754594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.754622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.754711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.754738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.754855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.754883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.754967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.754996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 
00:35:33.161 [2024-11-18 00:40:56.755086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.755114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.755199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.755226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.755354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.755394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.755517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.755546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.755666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.755694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 
00:35:33.161 [2024-11-18 00:40:56.755784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.755810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.755902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.755928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.756012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.756041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.756155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.756182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.756276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.756304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 
00:35:33.161 [2024-11-18 00:40:56.756414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.756442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.756579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.756606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.756726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.756755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.756872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.756905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.757023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.757052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 
00:35:33.161 [2024-11-18 00:40:56.757139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.757167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.757255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.757283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.757397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.757425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.757534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.757561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.757708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.757735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 
00:35:33.161 [2024-11-18 00:40:56.757817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.757844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.161 [2024-11-18 00:40:56.757959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.161 [2024-11-18 00:40:56.757986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.161 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.758071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.758099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.758238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.758265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.758387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.758415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 
00:35:33.162 [2024-11-18 00:40:56.758505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.758532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.758647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.758675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.758770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.758798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.758905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.758932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.759036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.759076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 
00:35:33.162 [2024-11-18 00:40:56.759224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.759254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.759374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.759402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.759492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.759520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.759612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.759639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.759782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.759809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 
00:35:33.162 [2024-11-18 00:40:56.759932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.759960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.760098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.760126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.760234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.760261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.760405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.760433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.760574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.760601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 
00:35:33.162 [2024-11-18 00:40:56.760721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.760749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.760864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.760891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.761005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.761032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.761172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.761200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 00:35:33.162 [2024-11-18 00:40:56.761282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.162 [2024-11-18 00:40:56.761307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.162 qpair failed and we were unable to recover it. 
00:35:33.162 [2024-11-18 00:40:56.761422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.761449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.761531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.761559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.761703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.761730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.761847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.761874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.761992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.762021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.762136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.762164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.762274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.762301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.762389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.762417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.762505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.762536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.762649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.762677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.762793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.762821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.762898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.762925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.763065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.763092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.763244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.763285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.763414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.763442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.763554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.763582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.763759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.763821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.763901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.763929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.764113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.764162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.764240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.764265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.764371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.764398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.764529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.764557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.764657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.764684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.764793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.764821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.764899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.764926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.765001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.765026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.765133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.765160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.765245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.765272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.765354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.765380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.765469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.765496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.765611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.765638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.765746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.765772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.765884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.765911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.766031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.766060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.766170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.766197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.766339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.766372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.766488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.766515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.766629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.766656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.766742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.766769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.766854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.766881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.766970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.162 [2024-11-18 00:40:56.766997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.162 qpair failed and we were unable to recover it.
00:35:33.162 [2024-11-18 00:40:56.767111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.767138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.767266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.767306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.767414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.767443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.767557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.767585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.767695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.767723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.767864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.767891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.768002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.768030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.768140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.768168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.768287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.768326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.768445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.768473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.768610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.768637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.768724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.768751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.768862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.768889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.769030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.769058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.769165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.769192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.769333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.769361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.769481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.769509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.769624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.769652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.769764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.769791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.769932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.769960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.770076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.770103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.770196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.770225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.770342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.770369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.770476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.770503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.770621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.770648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.770759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.770786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.770866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.770893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.770984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.771012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.771096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.771123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.771227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.771254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.771361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.771390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.771479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.771506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.771634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.771675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.771800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.771829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.771918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.771951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.772044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.772071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.772192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.772219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.772332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.772359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.772500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.772527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.772640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.772667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.772752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.772779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.772895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.772922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.773035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.773062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.773150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.773177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.773260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.773288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.773375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.773400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.773478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.773506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.773588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.773616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.773737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.773764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.773856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.773881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.773999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.774026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.774133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.774160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.774243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.774271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.774357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.163 [2024-11-18 00:40:56.774384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.163 qpair failed and we were unable to recover it.
00:35:33.163 [2024-11-18 00:40:56.774493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-18 00:40:56.774521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-18 00:40:56.774631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-18 00:40:56.774658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-18 00:40:56.774744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-18 00:40:56.774770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-18 00:40:56.774877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-18 00:40:56.774904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-18 00:40:56.774995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-18 00:40:56.775022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 
00:35:33.163 [2024-11-18 00:40:56.775133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-18 00:40:56.775160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-18 00:40:56.775295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-18 00:40:56.775331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-18 00:40:56.775411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-18 00:40:56.775444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.163 qpair failed and we were unable to recover it. 00:35:33.163 [2024-11-18 00:40:56.775558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.163 [2024-11-18 00:40:56.775586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.775671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.775698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 
00:35:33.164 [2024-11-18 00:40:56.775779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.775806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.775917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.775944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.776029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.776056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.776155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.776195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.776325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.776355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 
00:35:33.164 [2024-11-18 00:40:56.776492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.776522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.776660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.776693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.776789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.776817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.776946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.776974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.777090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.777117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 
00:35:33.164 [2024-11-18 00:40:56.777236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.777263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.777399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.777429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.777568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.777596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.777744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.777772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.777877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.777904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 
00:35:33.164 [2024-11-18 00:40:56.778021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.778048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.778161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.778189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.778329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.778358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.778445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.778478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.778574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.778602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 
00:35:33.164 [2024-11-18 00:40:56.778711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.778738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.778851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.778879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.779019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.779046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.779159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.779187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.779278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.779307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 
00:35:33.164 [2024-11-18 00:40:56.779427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.779455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.779545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.779573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.779687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.779714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.779824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.779851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.779939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.779966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 
00:35:33.164 [2024-11-18 00:40:56.780047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.780074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.780203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.780244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.780367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.780397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.780488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.780515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.780626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.780654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 
00:35:33.164 [2024-11-18 00:40:56.780827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.780881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.781087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.781149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.781261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.781294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.781427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.781454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.781568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.781595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 
00:35:33.164 [2024-11-18 00:40:56.781704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.781731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.781854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.781881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.781994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.782022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.782132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.782159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.782274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.782301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 
00:35:33.164 [2024-11-18 00:40:56.782400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.782427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.782509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.782535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.782625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.782653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.782744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.782772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.782859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.782886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 
00:35:33.164 [2024-11-18 00:40:56.782985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.783014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.783107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.783136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.783248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.783275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.783401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.783429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.783543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.783578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 
00:35:33.164 [2024-11-18 00:40:56.783687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.783714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.783804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.783831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.783922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.783949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.784092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.784119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.784232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.784259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 
00:35:33.164 [2024-11-18 00:40:56.784342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.784368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.784478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.784506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.784618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.784645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.784724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.784752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.784895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.784929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 
00:35:33.164 [2024-11-18 00:40:56.785047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.785074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.164 qpair failed and we were unable to recover it. 00:35:33.164 [2024-11-18 00:40:56.785190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.164 [2024-11-18 00:40:56.785217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.785414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.785443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.785582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.785609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.785747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.785775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 
00:35:33.165 [2024-11-18 00:40:56.785862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.785890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.785998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.786037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.786134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.786160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.786276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.786303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.786414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.786441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 
00:35:33.165 [2024-11-18 00:40:56.786592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.786620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.786711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.786738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.786950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.786978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.787094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca970 is same with the state(6) to be set 00:35:33.165 [2024-11-18 00:40:56.787257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.787286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.787452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.787493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 
00:35:33.165 [2024-11-18 00:40:56.787680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.787735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.787881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.787944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.788082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.788136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.788287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.788322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.788443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.788472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 
00:35:33.165 [2024-11-18 00:40:56.788587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.788614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.788750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.788798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.788958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.789009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.789148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.789175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.789263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.789290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 
00:35:33.165 [2024-11-18 00:40:56.789403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.789438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.789570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.789599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.789692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.789718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.789803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.789830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.789995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.790043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 
00:35:33.165 [2024-11-18 00:40:56.790156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.790183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.790272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.790299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.790398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.790427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.790599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.790640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.790762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.790801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 
00:35:33.165 [2024-11-18 00:40:56.790924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.790952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.791063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.791090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.791201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.791229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.791322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.791360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.791489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.791518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 
00:35:33.165 [2024-11-18 00:40:56.791632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.791673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.791819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.791869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.792001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.792051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.792192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.792219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.792346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.792375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 
00:35:33.165 [2024-11-18 00:40:56.792513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.792540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.792654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.792682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.792818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.792866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.793032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.793079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.793176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.793203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 
00:35:33.165 [2024-11-18 00:40:56.793297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.793337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.793438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.793465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.793556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.793587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.793678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.793706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.793797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.793824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 
00:35:33.165 [2024-11-18 00:40:56.793941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.793968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.794091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.794132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.794264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.794304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.794415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.794445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.794561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.794590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 
00:35:33.165 [2024-11-18 00:40:56.794701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.794729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.794873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.794900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.795015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.795050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.165 qpair failed and we were unable to recover it. 00:35:33.165 [2024-11-18 00:40:56.795196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-11-18 00:40:56.795227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.795327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.795369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 
00:35:33.166 [2024-11-18 00:40:56.795488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.795516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.796805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.796840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.797082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.797140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.797263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.797290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.797411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.797436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 
00:35:33.166 [2024-11-18 00:40:56.797553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.797580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.797704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.797732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.797844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.797870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.797983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.798010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.798095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.798122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 
00:35:33.166 [2024-11-18 00:40:56.798264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.798291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.798409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.798436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.798532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.798559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.798672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.798699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.798822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.798853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 
00:35:33.166 [2024-11-18 00:40:56.798942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.798970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.799062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.799090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.799182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.799208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.799327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.799354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.799463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.799490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 
00:35:33.166 [2024-11-18 00:40:56.799573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.799599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.799684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.799710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.799823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.799856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.800018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.800071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.800208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.800236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 
00:35:33.166 [2024-11-18 00:40:56.800381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.800409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.800527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.800555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.800668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.800695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.800815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.800851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.800972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.800999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 
00:35:33.166 [2024-11-18 00:40:56.801142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.801168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.801262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.801290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.801383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.801410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.801519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.801545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.801662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.801689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 
00:35:33.166 [2024-11-18 00:40:56.801773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.801800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.801915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.801942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.802060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.802086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.802167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.802193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.802270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.802294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 
00:35:33.166 [2024-11-18 00:40:56.802394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.802427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.802522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.802558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.802651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.802677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.802796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.802822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.802931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.802957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 
00:35:33.166 [2024-11-18 00:40:56.803099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.803127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.803241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.803269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.803419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.803458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.803604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.803654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 00:35:33.166 [2024-11-18 00:40:56.803735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-11-18 00:40:56.803761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.166 qpair failed and we were unable to recover it. 
00:35:33.166 [2024-11-18 00:40:56.803845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.166 [2024-11-18 00:40:56.803870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.166 qpair failed and we were unable to recover it.
00:35:33.166 [2024-11-18 00:40:56.804010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.166 [2024-11-18 00:40:56.804052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.166 qpair failed and we were unable to recover it.
00:35:33.166 [2024-11-18 00:40:56.804162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.166 [2024-11-18 00:40:56.804188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.166 qpair failed and we were unable to recover it.
00:35:33.166 [2024-11-18 00:40:56.804308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.166 [2024-11-18 00:40:56.804348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.166 qpair failed and we were unable to recover it.
00:35:33.166 [2024-11-18 00:40:56.804432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.166 [2024-11-18 00:40:56.804458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.166 qpair failed and we were unable to recover it.
00:35:33.166 [2024-11-18 00:40:56.804598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.166 [2024-11-18 00:40:56.804693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.166 qpair failed and we were unable to recover it.
00:35:33.166 [2024-11-18 00:40:56.804877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.166 [2024-11-18 00:40:56.804927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.166 qpair failed and we were unable to recover it.
00:35:33.166 [2024-11-18 00:40:56.805078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.166 [2024-11-18 00:40:56.805131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.166 qpair failed and we were unable to recover it.
00:35:33.166 [2024-11-18 00:40:56.805255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.166 [2024-11-18 00:40:56.805282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.166 qpair failed and we were unable to recover it.
00:35:33.166 [2024-11-18 00:40:56.805405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.166 [2024-11-18 00:40:56.805432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.166 qpair failed and we were unable to recover it.
00:35:33.166 [2024-11-18 00:40:56.805512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.166 [2024-11-18 00:40:56.805539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.166 qpair failed and we were unable to recover it.
00:35:33.166 [2024-11-18 00:40:56.805656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.166 [2024-11-18 00:40:56.805683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.166 qpair failed and we were unable to recover it.
00:35:33.166 [2024-11-18 00:40:56.805852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.166 [2024-11-18 00:40:56.805878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.166 qpair failed and we were unable to recover it.
00:35:33.166 [2024-11-18 00:40:56.805956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.166 [2024-11-18 00:40:56.805981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.166 qpair failed and we were unable to recover it.
00:35:33.166 [2024-11-18 00:40:56.806094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.166 [2024-11-18 00:40:56.806121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.166 qpair failed and we were unable to recover it.
00:35:33.166 [2024-11-18 00:40:56.806241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.806267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.806389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.806415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.806511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.806542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.806630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.806662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.806814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.806841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.806954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.806981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.807094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.807121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.807243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.807274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.807380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.807407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.807491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.807516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.807658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.807684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.807764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.807789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.807885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.807917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.808046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.808071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.808857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.808898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.809134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.809189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.809345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.809374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.809466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.809495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.809588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.809615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.809731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.809761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.809893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.809927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.810067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.810097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.810227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.810254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.810353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.810381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.810467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.810493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.810608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.810635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.810743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.810769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.810911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.810937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.811021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.811047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.811165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.811192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.811277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.811308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.811406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.811434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.811545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.811571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.811686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.811713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.811832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.811858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.811941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.811967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.812058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.812086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.812195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.812222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.812362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.812389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.812513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.812540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.812623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.812650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.812797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.812825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.812940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.812966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.813049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.813074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.813160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.813187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.813301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.813337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.813450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.813476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.813561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.813586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.813665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.813691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.813823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.813848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.813958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.813985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.814084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.814124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.814239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.814267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.814400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.814428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.814575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.814602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.814687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.814715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.814810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.814838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.814954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.814987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.815074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.815101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.815187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.815215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.815326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.815354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.815446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.815473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.167 qpair failed and we were unable to recover it.
00:35:33.167 [2024-11-18 00:40:56.815586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.167 [2024-11-18 00:40:56.815613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.168 qpair failed and we were unable to recover it.
00:35:33.168 [2024-11-18 00:40:56.815700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.168 [2024-11-18 00:40:56.815728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.168 qpair failed and we were unable to recover it.
00:35:33.168 [2024-11-18 00:40:56.815869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.168 [2024-11-18 00:40:56.815897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.168 qpair failed and we were unable to recover it.
00:35:33.168 [2024-11-18 00:40:56.816043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.168 [2024-11-18 00:40:56.816070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.168 qpair failed and we were unable to recover it.
00:35:33.168 [2024-11-18 00:40:56.816177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.168 [2024-11-18 00:40:56.816205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.168 qpair failed and we were unable to recover it.
00:35:33.168 [2024-11-18 00:40:56.816328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.168 [2024-11-18 00:40:56.816354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.168 qpair failed and we were unable to recover it.
00:35:33.168 [2024-11-18 00:40:56.816472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.168 [2024-11-18 00:40:56.816498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.168 qpair failed and we were unable to recover it.
00:35:33.168 [2024-11-18 00:40:56.816586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.168 [2024-11-18 00:40:56.816611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.168 qpair failed and we were unable to recover it.
00:35:33.168 [2024-11-18 00:40:56.816722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.168 [2024-11-18 00:40:56.816748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.168 qpair failed and we were unable to recover it.
00:35:33.168 [2024-11-18 00:40:56.816843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.168 [2024-11-18 00:40:56.816870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.168 qpair failed and we were unable to recover it.
00:35:33.168 [2024-11-18 00:40:56.816980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.168 [2024-11-18 00:40:56.817007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.168 qpair failed and we were unable to recover it.
00:35:33.168 [2024-11-18 00:40:56.817088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.168 [2024-11-18 00:40:56.817124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.168 qpair failed and we were unable to recover it.
00:35:33.168 [2024-11-18 00:40:56.817235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.168 [2024-11-18 00:40:56.817262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.168 qpair failed and we were unable to recover it.
00:35:33.168 [2024-11-18 00:40:56.817380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.168 [2024-11-18 00:40:56.817406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.168 qpair failed and we were unable to recover it.
00:35:33.168 [2024-11-18 00:40:56.817493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.168 [2024-11-18 00:40:56.817519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.168 qpair failed and we were unable to recover it.
00:35:33.168 [2024-11-18 00:40:56.817612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.168 [2024-11-18 00:40:56.817637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.168 qpair failed and we were unable to recover it.
00:35:33.168 [2024-11-18 00:40:56.817746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.168 [2024-11-18 00:40:56.817772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.168 qpair failed and we were unable to recover it.
00:35:33.168 [2024-11-18 00:40:56.817882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.168 [2024-11-18 00:40:56.817908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.168 qpair failed and we were unable to recover it.
00:35:33.168 [2024-11-18 00:40:56.818023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.818051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.818153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.818178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.818349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.818376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.818518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.818544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.818691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.818721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 
00:35:33.168 [2024-11-18 00:40:56.818862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.818888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.819033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.819059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.819173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.819199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.819340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.819367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.819458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.819484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 
00:35:33.168 [2024-11-18 00:40:56.819574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.819600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.819707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.819733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.819851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.819878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.820015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.820054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.820144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.820172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 
00:35:33.168 [2024-11-18 00:40:56.820293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.820327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.820444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.820472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.820552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.820578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.820708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.820735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.820828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.820855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 
00:35:33.168 [2024-11-18 00:40:56.820937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.820962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.821081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.821107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.821246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.821271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.821385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.821412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.821527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.821552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 
00:35:33.168 [2024-11-18 00:40:56.821642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.821668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.821758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.821783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.821874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.821901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.821983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.822011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.822123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.822150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 
00:35:33.168 [2024-11-18 00:40:56.822289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.822322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.822438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.822470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.822599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.822626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.822707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.822734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.822824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.822851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 
00:35:33.168 [2024-11-18 00:40:56.822989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.823016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.823142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.823168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.823281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.823307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.823403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.823429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.823544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.823571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 
00:35:33.168 [2024-11-18 00:40:56.823686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.823713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.823822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.823849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.823965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.823992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.824079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.824107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.824236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.824281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 
00:35:33.168 [2024-11-18 00:40:56.824404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.824434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.824531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.824559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.824690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.824719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.824822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.824849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.168 qpair failed and we were unable to recover it. 00:35:33.168 [2024-11-18 00:40:56.824963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.168 [2024-11-18 00:40:56.824990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 
00:35:33.169 [2024-11-18 00:40:56.825105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.825131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.825248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.825275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.825369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.825397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.825488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.825516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.825610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.825637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 
00:35:33.169 [2024-11-18 00:40:56.825754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.825781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.825897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.825923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.826009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.826036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.826129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.826157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.826297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.826329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 
00:35:33.169 [2024-11-18 00:40:56.826435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.826462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.826572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.826600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.826741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.826768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.826913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.826940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.827058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.827085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 
00:35:33.169 [2024-11-18 00:40:56.827216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.827245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.827360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.827387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.827496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.827523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.827669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.827696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.827827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.827856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 
00:35:33.169 [2024-11-18 00:40:56.827985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.828013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.828165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.828199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.828327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.828381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.828493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.828519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.828631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.828659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 
00:35:33.169 [2024-11-18 00:40:56.828747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.828774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.828937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.828965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.829105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.829134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.829253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.829282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.829395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.829422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 
00:35:33.169 [2024-11-18 00:40:56.829547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.829573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.829690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.829733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.829858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.829887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.830039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.830067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 00:35:33.169 [2024-11-18 00:40:56.830220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.169 [2024-11-18 00:40:56.830249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.169 qpair failed and we were unable to recover it. 
00:35:33.169 [2024-11-18 00:40:56.830360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.169 [2024-11-18 00:40:56.830388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.169 qpair failed and we were unable to recover it.
00:35:33.171 [... the same three-message sequence repeats through 00:40:56.847, alternating between tqpair=0x7eff44000b90 and tqpair=0x18bcb40; every connect() to 10.0.0.2, port=4420 fails with errno = 111 and no qpair recovers ...]
00:35:33.171 [2024-11-18 00:40:56.847289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.847328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.847445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.847471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.847583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.847609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.847724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.847750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.847860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.847885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 
00:35:33.171 [2024-11-18 00:40:56.847969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.847994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.848117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.848143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.848259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.848284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.848381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.848410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.848527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.848554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 
00:35:33.171 [2024-11-18 00:40:56.848639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.848666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.848763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.848790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.848868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.848895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.849012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.849038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.849125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.849152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 
00:35:33.171 [2024-11-18 00:40:56.849273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.849299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.849425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.849451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.849535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.849562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.849670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.849696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.849782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.849812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 
00:35:33.171 [2024-11-18 00:40:56.849892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.849917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.850029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.850055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.850175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.850201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.850322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.850349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.850444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.850471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 
00:35:33.171 [2024-11-18 00:40:56.850561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.850589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.850666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.850692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.850782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.850809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.850890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.850917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.851032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.851059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 
00:35:33.171 [2024-11-18 00:40:56.851202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.851228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.851319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.851345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.851461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.851487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.851637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.851663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.851744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.851770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 
00:35:33.171 [2024-11-18 00:40:56.851867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.851894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.852026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.852052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.852161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.852186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.852270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.852296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.852451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.852477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 
00:35:33.171 [2024-11-18 00:40:56.852589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.852615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.852705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.852739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.852864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.852891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.853003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.853028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 00:35:33.171 [2024-11-18 00:40:56.853114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.171 [2024-11-18 00:40:56.853140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.171 qpair failed and we were unable to recover it. 
00:35:33.172 [2024-11-18 00:40:56.853229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.853256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.853344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.853376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.853501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.853540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.853648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.853676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.853763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.853791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 
00:35:33.172 [2024-11-18 00:40:56.853929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.853956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.854070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.854097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.854211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.854239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.854361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.854389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.854505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.854531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 
00:35:33.172 [2024-11-18 00:40:56.854639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.854665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.854777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.854803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.854915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.854941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.855059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.855085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.855206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.855233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 
00:35:33.172 [2024-11-18 00:40:56.855331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.855358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.855458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.855485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.855628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.855670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.855833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.855877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.855961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.855987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 
00:35:33.172 [2024-11-18 00:40:56.856102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.856128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.856266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.856292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.856433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.856474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.856603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.856645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.856737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.856764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 
00:35:33.172 [2024-11-18 00:40:56.856873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.856899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.857005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.857031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.857152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.857178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.857295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.857334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 00:35:33.172 [2024-11-18 00:40:56.857415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.172 [2024-11-18 00:40:56.857441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.172 qpair failed and we were unable to recover it. 
00:35:33.172 [2024-11-18 00:40:56.857524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.857551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.857665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.857692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.857784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.857810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.857903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.857930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.858040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.858067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.858176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.858203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.858326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.858354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.858463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.858490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.858600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.858626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.858788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.858814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.858929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.858954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.859100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.859130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.859221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.859248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.859363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.859391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.859503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.859530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.859624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.859662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.859770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.859796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.859878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.859905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.860048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.860075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.860187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.860213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.860333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.860361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.860461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.860489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.860613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.860641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.860743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.860770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.860936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.860964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.861104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.861137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.861272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.861298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.861439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.861467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.861587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.861631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.861776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.861803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.861902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.861929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.862045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.862072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.862187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.862215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.172 qpair failed and we were unable to recover it.
00:35:33.172 [2024-11-18 00:40:56.862307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.172 [2024-11-18 00:40:56.862360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.862446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.862473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.862582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.862609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.862702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.862729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.862850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.862877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.863020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.863048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.863166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.863193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.863316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.863361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.863451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.863477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.863575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.863603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.863748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.863776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.863901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.863944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.864038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.864066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.864214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.864242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.864387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.864414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.864513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.864539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.864639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.864666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.864813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.864842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.864964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.864991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.865093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.865139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.865319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.865350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.865503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.865530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.865688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.865715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.865867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.865895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.866014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.866042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.866126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.866154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.866303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.866352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.866453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.866481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.866570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.866607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.866719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.866746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.866840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.866866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.867000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.867028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.867147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.867179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.867322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.867372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.867487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.867514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.867657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.867686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.867811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.867855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.868003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.868031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.868150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.868178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.868335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.868375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.868498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.868526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.868644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.868674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.868791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.868818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.868904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.868930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.869049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.869076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.869219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.869246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.869384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.869415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.869515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.869543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.869659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.869685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.869798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.869825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.869917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.869949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.870039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.870066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.870280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.870308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.870434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.870461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.870577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.870605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.870735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.870762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.870881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.870909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.871007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.871035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.871159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.871187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.871333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.871380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.871525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.871551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.871697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.871740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.871867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.871896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.872011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.872042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.872129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.872157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.872280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.872308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.872477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.173 [2024-11-18 00:40:56.872504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.173 qpair failed and we were unable to recover it.
00:35:33.173 [2024-11-18 00:40:56.872612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.174 [2024-11-18 00:40:56.872638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.174 qpair failed and we were unable to recover it.
00:35:33.174 [2024-11-18 00:40:56.872737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.174 [2024-11-18 00:40:56.872763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.174 qpair failed and we were unable to recover it.
00:35:33.174 [2024-11-18 00:40:56.872870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.174 [2024-11-18 00:40:56.872898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.174 qpair failed and we were unable to recover it.
00:35:33.174 [2024-11-18 00:40:56.873036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.174 [2024-11-18 00:40:56.873063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.174 qpair failed and we were unable to recover it.
00:35:33.174 [2024-11-18 00:40:56.873199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.174 [2024-11-18 00:40:56.873227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.174 qpair failed and we were unable to recover it.
00:35:33.174 [2024-11-18 00:40:56.873357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.174 [2024-11-18 00:40:56.873385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.174 qpair failed and we were unable to recover it.
00:35:33.174 [2024-11-18 00:40:56.873497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.174 [2024-11-18 00:40:56.873524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.174 qpair failed and we were unable to recover it.
00:35:33.174 [2024-11-18 00:40:56.873637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.174 [2024-11-18 00:40:56.873663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.174 qpair failed and we were unable to recover it.
00:35:33.174 [2024-11-18 00:40:56.873765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.174 [2024-11-18 00:40:56.873793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.174 qpair failed and we were unable to recover it.
00:35:33.174 [2024-11-18 00:40:56.873893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.174 [2024-11-18 00:40:56.873920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.174 qpair failed and we were unable to recover it.
00:35:33.174 [2024-11-18 00:40:56.874085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.174 [2024-11-18 00:40:56.874113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.174 qpair failed and we were unable to recover it.
00:35:33.174 [2024-11-18 00:40:56.874204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.174 [2024-11-18 00:40:56.874232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.174 qpair failed and we were unable to recover it.
00:35:33.174 [2024-11-18 00:40:56.874386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.174 [2024-11-18 00:40:56.874415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.174 qpair failed and we were unable to recover it.
00:35:33.174 [2024-11-18 00:40:56.874503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.174 [2024-11-18 00:40:56.874529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.174 qpair failed and we were unable to recover it.
00:35:33.174 [2024-11-18 00:40:56.874645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.174 [2024-11-18 00:40:56.874673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.174 qpair failed and we were unable to recover it.
00:35:33.174 [2024-11-18 00:40:56.874814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.874848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.874969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.874996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.875095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.875122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.875276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.875303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.875445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.875471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 
00:35:33.174 [2024-11-18 00:40:56.875557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.875590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.875681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.875707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.875871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.875899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.876078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.876105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.876239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.876277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 
00:35:33.174 [2024-11-18 00:40:56.876442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.876471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.876559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.876597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.876714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.876740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.876894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.876939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.877089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.877124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 
00:35:33.174 [2024-11-18 00:40:56.877234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.877263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.877405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.877433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.877571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.877603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.877683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.877710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.877838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.877886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 
00:35:33.174 [2024-11-18 00:40:56.878012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.878055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.878172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.878200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.878306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.878357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.878475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.878503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.878645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.878689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 
00:35:33.174 [2024-11-18 00:40:56.878799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.878827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.879010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.879043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.879160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.879187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.879275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.879303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.879453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.879481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 
00:35:33.174 [2024-11-18 00:40:56.879592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.879619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.879739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.879768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.879857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.879885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.880039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.880066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.880166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.880195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 
00:35:33.174 [2024-11-18 00:40:56.880299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.880342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.880464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.880491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.880574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.880601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.880716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.880744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.880860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.880887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 
00:35:33.174 [2024-11-18 00:40:56.880994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.881022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.881146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.881189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.881292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.881329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.881437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.881464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.881625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.881653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 
00:35:33.174 [2024-11-18 00:40:56.881769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.881797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.881915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.881942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.882031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.174 [2024-11-18 00:40:56.882059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.174 qpair failed and we were unable to recover it. 00:35:33.174 [2024-11-18 00:40:56.882201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.882227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.882370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.882407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 
00:35:33.175 [2024-11-18 00:40:56.882523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.882550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.882652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.882681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.882770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.882814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.882943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.882969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.883102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.883129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 
00:35:33.175 [2024-11-18 00:40:56.883230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.883257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.883399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.883426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.883507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.883539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.883673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.883701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.883837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.883881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 
00:35:33.175 [2024-11-18 00:40:56.883998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.884026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.884141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.884168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.884285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.884322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.884426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.884452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.884538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.884565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 
00:35:33.175 [2024-11-18 00:40:56.884674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.884700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.884798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.884826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.884935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.884965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.885079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.885106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.885226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.885253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 
00:35:33.175 [2024-11-18 00:40:56.885372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.885415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.885500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.885527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.885666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.885708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.885792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.885818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 00:35:33.175 [2024-11-18 00:40:56.885952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.175 [2024-11-18 00:40:56.885979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.175 qpair failed and we were unable to recover it. 
00:35:33.175 [2024-11-18 00:40:56.886070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.175 [2024-11-18 00:40:56.886097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.175 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." records repeat from 00:40:56.886245 through 00:40:56.902748, alternating between tqpair=0x7eff44000b90 and tqpair=0x7eff50000b90, all with addr=10.0.0.2, port=4420 ...]
00:35:33.177 [2024-11-18 00:40:56.902832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-18 00:40:56.902858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-18 00:40:56.903008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-18 00:40:56.903034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-18 00:40:56.903119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-18 00:40:56.903145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-18 00:40:56.903239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-18 00:40:56.903265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-18 00:40:56.903356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-18 00:40:56.903383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 
00:35:33.177 [2024-11-18 00:40:56.903520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-18 00:40:56.903545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-18 00:40:56.903659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-18 00:40:56.903684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-18 00:40:56.903809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-18 00:40:56.903835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-18 00:40:56.903912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-18 00:40:56.903938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 00:35:33.177 [2024-11-18 00:40:56.904054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.177 [2024-11-18 00:40:56.904080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.177 qpair failed and we were unable to recover it. 
00:35:33.177 [2024-11-18 00:40:56.904732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:35:33.177 [2024-11-18 00:40:56.904771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 
00:35:33.177 qpair failed and we were unable to recover it. 
00:35:33.177 [2024-11-18 00:40:56.906399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:35:33.177 [2024-11-18 00:40:56.906491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 
00:35:33.177 qpair failed and we were unable to recover it. 
00:35:33.179 [2024-11-18 00:40:56.916740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.916767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.916908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.916939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.917076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.917103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.917222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.917248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.917334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.917361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 
00:35:33.179 [2024-11-18 00:40:56.917481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.917507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.917597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.917623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.917743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.917770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.917905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.917931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.918037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.918064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 
00:35:33.179 [2024-11-18 00:40:56.918190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.918217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.918304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.918341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.918455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.918482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.918643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.918670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.918784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.918812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 
00:35:33.179 [2024-11-18 00:40:56.918930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.918958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.919056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.919083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.919220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.919247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.919325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.919355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.919445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.919472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 
00:35:33.179 [2024-11-18 00:40:56.919591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.919618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.919692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.919718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.919797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.919824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.919896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.919922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.920064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.920090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 
00:35:33.179 [2024-11-18 00:40:56.920181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.920207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.920332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.920359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.920471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.920497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.920620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.920648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.920763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.920789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 
00:35:33.179 [2024-11-18 00:40:56.920874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.920900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.921010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.921036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.921154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.921194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.921340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.921380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.921525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.921553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 
00:35:33.179 [2024-11-18 00:40:56.921671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.921698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.921835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.921860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.921977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.922004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.922123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.922150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.922236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.922267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 
00:35:33.179 [2024-11-18 00:40:56.922397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.922424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.922534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.922561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.922652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.922679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.922825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.922851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.922936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.922962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 
00:35:33.179 [2024-11-18 00:40:56.923076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.923102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.923221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.923247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.923354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.923380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.923464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.923490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.923614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.923641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 
00:35:33.179 [2024-11-18 00:40:56.923783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.923809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.923931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.923957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.924045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.924072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.924196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.924223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.924363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.924403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 
00:35:33.179 [2024-11-18 00:40:56.924495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.924521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.924646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.924672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.924780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.924806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.924905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.179 [2024-11-18 00:40:56.924930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.179 qpair failed and we were unable to recover it. 00:35:33.179 [2024-11-18 00:40:56.925038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.180 [2024-11-18 00:40:56.925064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.180 qpair failed and we were unable to recover it. 
00:35:33.180 [2024-11-18 00:40:56.925201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.180 [2024-11-18 00:40:56.925225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.180 qpair failed and we were unable to recover it. 00:35:33.180 [2024-11-18 00:40:56.925340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.180 [2024-11-18 00:40:56.925367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.180 qpair failed and we were unable to recover it. 00:35:33.180 [2024-11-18 00:40:56.925508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.180 [2024-11-18 00:40:56.925534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.180 qpair failed and we were unable to recover it. 00:35:33.180 [2024-11-18 00:40:56.925614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.180 [2024-11-18 00:40:56.925640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.180 qpair failed and we were unable to recover it. 00:35:33.180 [2024-11-18 00:40:56.925729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.180 [2024-11-18 00:40:56.925755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.180 qpair failed and we were unable to recover it. 
00:35:33.180 [2024-11-18 00:40:56.925900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.180 [2024-11-18 00:40:56.925925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.180 qpair failed and we were unable to recover it. 00:35:33.180 [2024-11-18 00:40:56.926036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.180 [2024-11-18 00:40:56.926067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.180 qpair failed and we were unable to recover it. 00:35:33.180 [2024-11-18 00:40:56.926176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.180 [2024-11-18 00:40:56.926202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.180 qpair failed and we were unable to recover it. 00:35:33.180 [2024-11-18 00:40:56.926321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.180 [2024-11-18 00:40:56.926347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.180 qpair failed and we were unable to recover it. 00:35:33.180 [2024-11-18 00:40:56.926440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.180 [2024-11-18 00:40:56.926465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.180 qpair failed and we were unable to recover it. 
00:35:33.180 [2024-11-18 00:40:56.926552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.180 [2024-11-18 00:40:56.926578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.180 qpair failed and we were unable to recover it. 00:35:33.180 [2024-11-18 00:40:56.926668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.180 [2024-11-18 00:40:56.926695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.180 qpair failed and we were unable to recover it. 00:35:33.180 [2024-11-18 00:40:56.926809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.180 [2024-11-18 00:40:56.926836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.180 qpair failed and we were unable to recover it. 00:35:33.180 [2024-11-18 00:40:56.926954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.180 [2024-11-18 00:40:56.926980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.180 qpair failed and we were unable to recover it. 00:35:33.180 [2024-11-18 00:40:56.927105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.180 [2024-11-18 00:40:56.927131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.180 qpair failed and we were unable to recover it. 
00:35:33.180 [2024-11-18 00:40:56.927246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.180 [2024-11-18 00:40:56.927272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.180 qpair failed and we were unable to recover it.
00:35:33.465 last three messages repeated for each failed reconnect attempt from 00:40:56.927 through 00:40:56.943 (tqpair=0x7eff50000b90, 0x18bcb40, and 0x7eff48000b90; addr=10.0.0.2, port=4420 in every case)
00:35:33.465 [2024-11-18 00:40:56.943249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-18 00:40:56.943276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-18 00:40:56.943407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-18 00:40:56.943434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-18 00:40:56.943549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-18 00:40:56.943576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-18 00:40:56.943694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-18 00:40:56.943720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-18 00:40:56.943836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-18 00:40:56.943863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 
00:35:33.465 [2024-11-18 00:40:56.943983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-18 00:40:56.944012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-18 00:40:56.944102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-18 00:40:56.944129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-18 00:40:56.944216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-18 00:40:56.944243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-18 00:40:56.944379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-18 00:40:56.944406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-18 00:40:56.944491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-18 00:40:56.944518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 
00:35:33.465 [2024-11-18 00:40:56.944640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-18 00:40:56.944672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-18 00:40:56.944797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-18 00:40:56.944825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-18 00:40:56.944940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-18 00:40:56.944967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-18 00:40:56.945111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-18 00:40:56.945138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-18 00:40:56.945287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-18 00:40:56.945320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 
00:35:33.465 [2024-11-18 00:40:56.945417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.465 [2024-11-18 00:40:56.945444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.465 qpair failed and we were unable to recover it. 00:35:33.465 [2024-11-18 00:40:56.945555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.945581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.945694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.945720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.945833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.945859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.945959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.945986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 
00:35:33.466 [2024-11-18 00:40:56.946102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.946130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.946217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.946244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.946352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.946382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.946459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.946485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.946589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.946617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 
00:35:33.466 [2024-11-18 00:40:56.946763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.946789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.946887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.946914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.947076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.947102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.947220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.947247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.947348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.947384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 
00:35:33.466 [2024-11-18 00:40:56.947499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.947526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.947643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.947670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.947780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.947807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.947919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.947946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.948066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.948093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 
00:35:33.466 [2024-11-18 00:40:56.948201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.948227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.948347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.948383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.948481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.948507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.948648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.948675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.948792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.948819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 
00:35:33.466 [2024-11-18 00:40:56.948954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.948980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.949100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.949127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.949245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.949273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.949384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.949423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.949514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.949542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 
00:35:33.466 [2024-11-18 00:40:56.949660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.949687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.949778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.949804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.949918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.949945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.950072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.950099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.950239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.950265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 
00:35:33.466 [2024-11-18 00:40:56.950397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.950428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.950545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.950572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.950711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.466 [2024-11-18 00:40:56.950737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.466 qpair failed and we were unable to recover it. 00:35:33.466 [2024-11-18 00:40:56.950877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.950903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 00:35:33.467 [2024-11-18 00:40:56.951019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.951046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 
00:35:33.467 [2024-11-18 00:40:56.951157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.951196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 00:35:33.467 [2024-11-18 00:40:56.951328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.951358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 00:35:33.467 [2024-11-18 00:40:56.951476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.951504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 00:35:33.467 [2024-11-18 00:40:56.951651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.951677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 00:35:33.467 [2024-11-18 00:40:56.951785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.951811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 
00:35:33.467 [2024-11-18 00:40:56.951936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.951963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 00:35:33.467 [2024-11-18 00:40:56.952076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.952102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 00:35:33.467 [2024-11-18 00:40:56.952252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.952279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 00:35:33.467 [2024-11-18 00:40:56.952446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.952476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 00:35:33.467 [2024-11-18 00:40:56.952626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.952660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 
00:35:33.467 [2024-11-18 00:40:56.952800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.952826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 00:35:33.467 [2024-11-18 00:40:56.952951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.952979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 00:35:33.467 [2024-11-18 00:40:56.953066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.953092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 00:35:33.467 [2024-11-18 00:40:56.953222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.953249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 00:35:33.467 [2024-11-18 00:40:56.953365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.953392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 
00:35:33.467 [2024-11-18 00:40:56.953475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.953501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 00:35:33.467 [2024-11-18 00:40:56.953614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.953641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 00:35:33.467 [2024-11-18 00:40:56.953757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.953786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 00:35:33.467 [2024-11-18 00:40:56.953910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.953938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 00:35:33.467 [2024-11-18 00:40:56.954051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.467 [2024-11-18 00:40:56.954077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.467 qpair failed and we were unable to recover it. 
00:35:33.467 [2024-11-18 00:40:56.954227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.467 [2024-11-18 00:40:56.954253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.467 qpair failed and we were unable to recover it.
00:35:33.467 [2024-11-18 00:40:56.954382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.467 [2024-11-18 00:40:56.954409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.467 qpair failed and we were unable to recover it.
00:35:33.467 [2024-11-18 00:40:56.954500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.467 [2024-11-18 00:40:56.954525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.467 qpair failed and we were unable to recover it.
00:35:33.467 [2024-11-18 00:40:56.954646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.467 [2024-11-18 00:40:56.954673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.467 qpair failed and we were unable to recover it.
00:35:33.467 [2024-11-18 00:40:56.954807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.467 [2024-11-18 00:40:56.954833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.467 qpair failed and we were unable to recover it.
00:35:33.467 [2024-11-18 00:40:56.954913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.467 [2024-11-18 00:40:56.954938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.467 qpair failed and we were unable to recover it.
00:35:33.467 [2024-11-18 00:40:56.955067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.467 [2024-11-18 00:40:56.955094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.467 qpair failed and we were unable to recover it.
00:35:33.467 [2024-11-18 00:40:56.955260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.467 [2024-11-18 00:40:56.955286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.467 qpair failed and we were unable to recover it.
00:35:33.467 [2024-11-18 00:40:56.955432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.467 [2024-11-18 00:40:56.955471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.467 qpair failed and we were unable to recover it.
00:35:33.467 [2024-11-18 00:40:56.955601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.467 [2024-11-18 00:40:56.955635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.467 qpair failed and we were unable to recover it.
00:35:33.467 [2024-11-18 00:40:56.955727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.467 [2024-11-18 00:40:56.955753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.467 qpair failed and we were unable to recover it.
00:35:33.467 [2024-11-18 00:40:56.955833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.467 [2024-11-18 00:40:56.955859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.467 qpair failed and we were unable to recover it.
00:35:33.467 [2024-11-18 00:40:56.955944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.467 [2024-11-18 00:40:56.955970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.467 qpair failed and we were unable to recover it.
00:35:33.467 [2024-11-18 00:40:56.956079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.467 [2024-11-18 00:40:56.956105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.467 qpair failed and we were unable to recover it.
00:35:33.467 [2024-11-18 00:40:56.956258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.467 [2024-11-18 00:40:56.956284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.467 qpair failed and we were unable to recover it.
00:35:33.467 [2024-11-18 00:40:56.956410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.467 [2024-11-18 00:40:56.956443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.467 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.956560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.956595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.956681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.956706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.956796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.956822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.956903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.956929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.957044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.957069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.957176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.957201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.957330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.957370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.957465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.957493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.957618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.957647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.957782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.957809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.957926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.957954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.958073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.958100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.958181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.958208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.958307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.958340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.958429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.958455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.958566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.958597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.958688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.958713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.958866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.958892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.958980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.959005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.959086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.959112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.959192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.959219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.959341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.959379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.959492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.959517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.959672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.959697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.959789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.959832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.960013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.960039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.960145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.960176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.960273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.960300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.960424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.960449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.960538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.960564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.960678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.960703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.960844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.960869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.960977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.961003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.961124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.961166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.961294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.961329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.961447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.961473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.961593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.468 [2024-11-18 00:40:56.961619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.468 qpair failed and we were unable to recover it.
00:35:33.468 [2024-11-18 00:40:56.961763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.961790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.961927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.961953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.962091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.962116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.962221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.962247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.962335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.962362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.962448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.962474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.962572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.962600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.962755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.962782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.962922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.962948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.963062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.963088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.963172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.963198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.963323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.963353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.963465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.963491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.963567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.963593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.963720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.963745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.963866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.963892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.963969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.963998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.964091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.964118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.964245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.964271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.964419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.964446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.964581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.964607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.964751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.964777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.964868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.964894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.965007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.965034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.965155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.965189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.965271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.965296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.965399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.965426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.965537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.965562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.965657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.965685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.965836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.965863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.966009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.966035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.966154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.469 [2024-11-18 00:40:56.966181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.469 qpair failed and we were unable to recover it.
00:35:33.469 [2024-11-18 00:40:56.966263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.966290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.966397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.966424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.966537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.966563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.966643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.966669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.966777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.966804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.966914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.966940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.967022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.967048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.967167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.967194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.967319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.967346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.967434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.967460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.967576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.967602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.967709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.967748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.967887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.967916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.968044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.968071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.968212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.968238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.968376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.968405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.968559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.968586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.968727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.968754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.968895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.968921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.969032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.969059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.969172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.969199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.969364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.969393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.969530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.969573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.969681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.969708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.969821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.969855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.969977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.970004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.970083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.970110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.970226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.970253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.970397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.970425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.970569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.470 [2024-11-18 00:40:56.970596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.470 qpair failed and we were unable to recover it.
00:35:33.470 [2024-11-18 00:40:56.970719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.470 [2024-11-18 00:40:56.970745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.470 qpair failed and we were unable to recover it. 00:35:33.470 [2024-11-18 00:40:56.970873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.470 [2024-11-18 00:40:56.970899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.470 qpair failed and we were unable to recover it. 00:35:33.470 [2024-11-18 00:40:56.971021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.470 [2024-11-18 00:40:56.971048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.470 qpair failed and we were unable to recover it. 00:35:33.470 [2024-11-18 00:40:56.971199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.470 [2024-11-18 00:40:56.971225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.470 qpair failed and we were unable to recover it. 00:35:33.470 [2024-11-18 00:40:56.971383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.470 [2024-11-18 00:40:56.971411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.470 qpair failed and we were unable to recover it. 
00:35:33.470 [2024-11-18 00:40:56.971526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.470 [2024-11-18 00:40:56.971553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.470 qpair failed and we were unable to recover it. 00:35:33.470 [2024-11-18 00:40:56.971672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.470 [2024-11-18 00:40:56.971699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.470 qpair failed and we were unable to recover it. 00:35:33.470 [2024-11-18 00:40:56.971811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.470 [2024-11-18 00:40:56.971838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.470 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.971959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.971986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.972126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.972152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 
00:35:33.471 [2024-11-18 00:40:56.972244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.972272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.972418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.972445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.972590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.972616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.972707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.972735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.972857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.972884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 
00:35:33.471 [2024-11-18 00:40:56.972994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.973021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.973115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.973142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.973257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.973284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.973383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.973410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.973504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.973531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 
00:35:33.471 [2024-11-18 00:40:56.973614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.973641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.973759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.973786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.973909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.973936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.974034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.974073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.974194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.974222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 
00:35:33.471 [2024-11-18 00:40:56.974323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.974361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.974477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.974504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.974618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.974644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.974732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.974758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.974845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.974872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 
00:35:33.471 [2024-11-18 00:40:56.974989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.975017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.975095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.975126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.975235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.975261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.975377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.975404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.975536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.975567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 
00:35:33.471 [2024-11-18 00:40:56.975661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.975688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.975799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.975826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.975937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.975964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.976098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.976125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.976243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.976270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 
00:35:33.471 [2024-11-18 00:40:56.976397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.976424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.976554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.976582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.976696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.976722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.976801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.976827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 00:35:33.471 [2024-11-18 00:40:56.976948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.976974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.471 qpair failed and we were unable to recover it. 
00:35:33.471 [2024-11-18 00:40:56.977121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.471 [2024-11-18 00:40:56.977148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.977263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.977291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.977452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.977479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.977567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.977594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.977739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.977766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 
00:35:33.472 [2024-11-18 00:40:56.977906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.977932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.978059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.978085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.978209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.978235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.978324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.978362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.978474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.978500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 
00:35:33.472 [2024-11-18 00:40:56.978617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.978643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.978786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.978812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.978923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.978949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.979034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.979060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.979181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.979207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 
00:35:33.472 [2024-11-18 00:40:56.979320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.979347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.979505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.979544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.983474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.983516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.983670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.983699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.983844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.983870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 
00:35:33.472 [2024-11-18 00:40:56.983990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.984016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.984178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.984230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.984414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.984442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.984560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.984594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.984772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.984798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 
00:35:33.472 [2024-11-18 00:40:56.984912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.984938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.985085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.985112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.985250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.985275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.985419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.985458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.985610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.985639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 
00:35:33.472 [2024-11-18 00:40:56.985736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.985764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.985850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.985876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.985985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.986011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.986164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.986191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.986334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.986370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 
00:35:33.472 [2024-11-18 00:40:56.986466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.986492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.986605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.986631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.472 [2024-11-18 00:40:56.986746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.472 [2024-11-18 00:40:56.986773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.472 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.986890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.986917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.987031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.987058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 
00:35:33.473 [2024-11-18 00:40:56.987211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.987239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.987364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.987392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.987485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.987511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.987594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.987622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.987741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.987768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 
00:35:33.473 [2024-11-18 00:40:56.987914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.987939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.988030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.988056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.988183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.988210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.988303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.988337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.988485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.988511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 
00:35:33.473 [2024-11-18 00:40:56.988651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.988678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.988798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.988824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.988917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.988945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.989036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.989063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.989149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.989176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 
00:35:33.473 [2024-11-18 00:40:56.989289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.989322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.989438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.989465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.989609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.989635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.989743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.989770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.989886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.989911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 
00:35:33.473 [2024-11-18 00:40:56.990000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.990027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.990120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.990147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.990268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.990294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.990456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.990482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.990622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.990648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 
00:35:33.473 [2024-11-18 00:40:56.990792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.990819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.990937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.990963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.991103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.991129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.991245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.991271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.991404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.991431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 
00:35:33.473 [2024-11-18 00:40:56.991565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.991592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.991732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.991759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.991876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.991902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.473 qpair failed and we were unable to recover it. 00:35:33.473 [2024-11-18 00:40:56.991984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.473 [2024-11-18 00:40:56.992012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.992135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.992161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 
00:35:33.474 [2024-11-18 00:40:56.992249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.992275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.992365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.992391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.992511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.992538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.992656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.992681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.992792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.992818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 
00:35:33.474 [2024-11-18 00:40:56.992906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.992931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.993054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.993080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.993166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.993191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.993274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.993306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.993394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.993420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 
00:35:33.474 [2024-11-18 00:40:56.993531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.993557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.993651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.993677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.993788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.993814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.993928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.993956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.994047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.994073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 
00:35:33.474 [2024-11-18 00:40:56.994202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.994240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.994397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.994425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.994569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.994596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.994710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.994736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.994848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.994874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 
00:35:33.474 [2024-11-18 00:40:56.994965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.994991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.995100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.995126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.995316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.995347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.995471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.995498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.995616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.995644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 
00:35:33.474 [2024-11-18 00:40:56.995728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.995755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.995849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.995876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.995993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.996020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.996118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.996144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.996282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.996308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 
00:35:33.474 [2024-11-18 00:40:56.996429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.996456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.996538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.996565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.996665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.996692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.996802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.996830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 00:35:33.474 [2024-11-18 00:40:56.996916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.474 [2024-11-18 00:40:56.996943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.474 qpair failed and we were unable to recover it. 
00:35:33.475 [2024-11-18 00:40:56.997066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.997098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:56.997230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.997256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:56.997406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.997434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:56.997549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.997576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:56.997663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.997688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 
00:35:33.475 [2024-11-18 00:40:56.997770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.997795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:56.997883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.997909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:56.998014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.998039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:56.998164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.998190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:56.998279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.998304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 
00:35:33.475 [2024-11-18 00:40:56.998449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.998487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:56.998621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.998650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:56.998774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.998801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:56.998916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.998943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:56.999090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.999117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 
00:35:33.475 [2024-11-18 00:40:56.999222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.999249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:56.999371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.999398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:56.999505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.999532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:56.999617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.999643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:56.999731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.999757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 
00:35:33.475 [2024-11-18 00:40:56.999907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:56.999934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:57.000081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:57.000108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:57.000225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:57.000252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:57.000390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:57.000416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 00:35:33.475 [2024-11-18 00:40:57.000533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.475 [2024-11-18 00:40:57.000560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.475 qpair failed and we were unable to recover it. 
00:35:33.475 [2024-11-18 00:40:57.000704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.475 [2024-11-18 00:40:57.000732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.475 qpair failed and we were unable to recover it.
00:35:33.475 [2024-11-18 00:40:57.000890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.475 [2024-11-18 00:40:57.000962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.475 qpair failed and we were unable to recover it.
00:35:33.475 [2024-11-18 00:40:57.001139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.475 [2024-11-18 00:40:57.001202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.475 qpair failed and we were unable to recover it.
00:35:33.475 [2024-11-18 00:40:57.001337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.475 [2024-11-18 00:40:57.001387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.475 qpair failed and we were unable to recover it.
00:35:33.475 [2024-11-18 00:40:57.001549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.475 [2024-11-18 00:40:57.001585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.475 qpair failed and we were unable to recover it.
00:35:33.475 [2024-11-18 00:40:57.001730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.475 [2024-11-18 00:40:57.001767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.475 qpair failed and we were unable to recover it.
00:35:33.475 [2024-11-18 00:40:57.001883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.475 [2024-11-18 00:40:57.001910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.475 qpair failed and we were unable to recover it.
00:35:33.475 [2024-11-18 00:40:57.001995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.475 [2024-11-18 00:40:57.002022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.475 qpair failed and we were unable to recover it.
00:35:33.475 [2024-11-18 00:40:57.002141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.475 [2024-11-18 00:40:57.002168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.475 qpair failed and we were unable to recover it.
00:35:33.475 [2024-11-18 00:40:57.002262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.475 [2024-11-18 00:40:57.002289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.475 qpair failed and we were unable to recover it.
00:35:33.475 [2024-11-18 00:40:57.002413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.475 [2024-11-18 00:40:57.002441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.475 qpair failed and we were unable to recover it.
00:35:33.475 [2024-11-18 00:40:57.002568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.475 [2024-11-18 00:40:57.002595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.475 qpair failed and we were unable to recover it.
00:35:33.475 [2024-11-18 00:40:57.002681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.475 [2024-11-18 00:40:57.002708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.475 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.002827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.002857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.002985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.003013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.003126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.003157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.003280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.003307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.003408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.003436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.003555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.003582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.003697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.003724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.003870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.003897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.003983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.004009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.004109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.004148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.004247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.004276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.004390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.004418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.004497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.004522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.004648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.004674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.004783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.004810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.004930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.004956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.005086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.005114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.005237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.005263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.005391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.005420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.005503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.005532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.005644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.005670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.005759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.005786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.005899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.005926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.006041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.006068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.006156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.006184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.006271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.006297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.006388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.006415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.006526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.006552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.006669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.006695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.006791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.006837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.006985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.007013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.007119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.007147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.007264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.007291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.007384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.007412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.007530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.007558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.007708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.007742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.007875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.007909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.476 qpair failed and we were unable to recover it.
00:35:33.476 [2024-11-18 00:40:57.008014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.476 [2024-11-18 00:40:57.008048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.008220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.008248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.008414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.008463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.008577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.008612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.008788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.008821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.008928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.008961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.009065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.009097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.009233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.009266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.009395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.009423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.009514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.009542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.009659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.009706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.009857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.009904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.010070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.010114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.010245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.010274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.010369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.010398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.010513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.010558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.010673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.010700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.010967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.011031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.011214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.011278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.011533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.011594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.011731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.011802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.012034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.012099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.012234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.012263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.012392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.012419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.012548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.012595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.012710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.012738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.012833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.012860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.012976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.013003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.013122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.013149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.013283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.013330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.013462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.013508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.013643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.013678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.013813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.013856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.014023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.014058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.014188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.014223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.014369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.014397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.014491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.014518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.014693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.014736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.477 qpair failed and we were unable to recover it.
00:35:33.477 [2024-11-18 00:40:57.014894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.477 [2024-11-18 00:40:57.014942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.478 qpair failed and we were unable to recover it.
00:35:33.478 [2024-11-18 00:40:57.015070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.478 [2024-11-18 00:40:57.015102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.478 qpair failed and we were unable to recover it.
00:35:33.478 [2024-11-18 00:40:57.015235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.478 [2024-11-18 00:40:57.015268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.478 qpair failed and we were unable to recover it.
00:35:33.478 [2024-11-18 00:40:57.015414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.478 [2024-11-18 00:40:57.015441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.478 qpair failed and we were unable to recover it.
00:35:33.478 [2024-11-18 00:40:57.015536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.478 [2024-11-18 00:40:57.015564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.478 qpair failed and we were unable to recover it.
00:35:33.478 [2024-11-18 00:40:57.015747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.478 [2024-11-18 00:40:57.015779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.478 qpair failed and we were unable to recover it.
00:35:33.478 [2024-11-18 00:40:57.015916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.478 [2024-11-18 00:40:57.015948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.478 qpair failed and we were unable to recover it.
00:35:33.478 [2024-11-18 00:40:57.016073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.016100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.016263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.016296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.016452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.016479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.016573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.016620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.016781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.016814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 
00:35:33.478 [2024-11-18 00:40:57.016982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.017015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.017120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.017154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.017287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.017338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.017425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.017473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.017611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.017644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 
00:35:33.478 [2024-11-18 00:40:57.017787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.017819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.017932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.017998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.018108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.018137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.018259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.018288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.018428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.018461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 
00:35:33.478 [2024-11-18 00:40:57.018582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.018609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.018689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.018716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.018831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.018859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.018971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.018998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.019136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.019162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 
00:35:33.478 [2024-11-18 00:40:57.019274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.019301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.019428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.019456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.019605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.019657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.019825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.019861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.020006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.020042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 
00:35:33.478 [2024-11-18 00:40:57.020239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.020302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.020477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.478 [2024-11-18 00:40:57.020504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.478 qpair failed and we were unable to recover it. 00:35:33.478 [2024-11-18 00:40:57.020654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.020719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.020938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.021002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.021258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.021327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 
00:35:33.479 [2024-11-18 00:40:57.021455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.021484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.021585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.021613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.021786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.021833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.021977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.022025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.022113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.022140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 
00:35:33.479 [2024-11-18 00:40:57.022259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.022288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.022467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.022507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.022629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.022659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.022785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.022812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.022940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.022976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 
00:35:33.479 [2024-11-18 00:40:57.023102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.023137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.023280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.023336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.023469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.023496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.023611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.023646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.023779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.023814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 
00:35:33.479 [2024-11-18 00:40:57.023990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.024040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.024128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.024156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.024280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.024307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.024461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.024508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.024611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.024645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 
00:35:33.479 [2024-11-18 00:40:57.024770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.024816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.024930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.024980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.025104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.025132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.025244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.025271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.025368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.025401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 
00:35:33.479 [2024-11-18 00:40:57.025528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.025577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.025716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.025751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.025891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.025925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.026036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.026072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.026233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.026284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 
00:35:33.479 [2024-11-18 00:40:57.026456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.026485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.026625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.026671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.026816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.026861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.026968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.027017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 00:35:33.479 [2024-11-18 00:40:57.027136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.479 [2024-11-18 00:40:57.027163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.479 qpair failed and we were unable to recover it. 
00:35:33.479 [2024-11-18 00:40:57.027304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.027346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.027431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.027459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.027550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.027577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.027672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.027699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.027787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.027815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 
00:35:33.480 [2024-11-18 00:40:57.027906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.027933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.028028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.028058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.028148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.028176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.028333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.028373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.028513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.028549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 
00:35:33.480 [2024-11-18 00:40:57.028722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.028757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.028930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.028974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.029076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.029153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.029331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.029361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.029483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.029510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 
00:35:33.480 [2024-11-18 00:40:57.029659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.029703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.029804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.029846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.029985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.030020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.030193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.030222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.030302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.030336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 
00:35:33.480 [2024-11-18 00:40:57.030453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.030480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.030593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.030627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.030775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.030822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.030967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.030994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 00:35:33.480 [2024-11-18 00:40:57.031111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.480 [2024-11-18 00:40:57.031140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.480 qpair failed and we were unable to recover it. 
00:35:33.483 [2024-11-18 00:40:57.049970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-18 00:40:57.050004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-18 00:40:57.050173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-18 00:40:57.050208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-18 00:40:57.050382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-18 00:40:57.050410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-18 00:40:57.050561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-18 00:40:57.050608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-18 00:40:57.050715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-18 00:40:57.050745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 
00:35:33.483 [2024-11-18 00:40:57.050873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-18 00:40:57.050906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-18 00:40:57.051008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-18 00:40:57.051054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-18 00:40:57.051156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-18 00:40:57.051183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-18 00:40:57.051319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-18 00:40:57.051349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-18 00:40:57.051444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-18 00:40:57.051472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 
00:35:33.483 [2024-11-18 00:40:57.051605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-18 00:40:57.051639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.483 [2024-11-18 00:40:57.051742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.483 [2024-11-18 00:40:57.051768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.483 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.051898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.051927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.052081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.052108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.052202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.052230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 
00:35:33.484 [2024-11-18 00:40:57.052396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.052430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.052589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.052623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.052729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.052764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.052897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.052931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.053039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.053073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 
00:35:33.484 [2024-11-18 00:40:57.053178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.053212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.053378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.053427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.053564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.053610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.053715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.053749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.053848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.053875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 
00:35:33.484 [2024-11-18 00:40:57.053986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.054013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.054125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.054152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.054233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.054261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.054352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.054380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.054494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.054521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 
00:35:33.484 [2024-11-18 00:40:57.054660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.054689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.054783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.054811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.054925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.054952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.055039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.055067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.055177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.055204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 
00:35:33.484 [2024-11-18 00:40:57.055353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.055390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.055532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.055566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.055711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.055745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.055875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.055909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.056050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.056084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 
00:35:33.484 [2024-11-18 00:40:57.056201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.056245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.056329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.056357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.056473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.056500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.056632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.056696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.056816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.056853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 
00:35:33.484 [2024-11-18 00:40:57.056991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.057027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.057127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.057161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.057279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.057326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.057441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.057469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 00:35:33.484 [2024-11-18 00:40:57.057584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.484 [2024-11-18 00:40:57.057621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.484 qpair failed and we were unable to recover it. 
00:35:33.485 [2024-11-18 00:40:57.057799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.057833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.057941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.057975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.058122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.058159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.058360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.058401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.058498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.058528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 
00:35:33.485 [2024-11-18 00:40:57.058697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.058743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.058914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.058966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.059097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.059124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.059237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.059264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.059400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.059448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 
00:35:33.485 [2024-11-18 00:40:57.059593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.059640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.059811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.059860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.059950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.059977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.060088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.060116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.060204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.060231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 
00:35:33.485 [2024-11-18 00:40:57.060378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.060407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.060504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.060532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.060659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.060686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.060771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.060799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.060913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.060947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 
00:35:33.485 [2024-11-18 00:40:57.061094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.061127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.061272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.061300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.061422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.061454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.061578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.061607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.061787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.061822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 
00:35:33.485 [2024-11-18 00:40:57.061961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.061997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.062136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.062177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.062329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.062362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.062490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.062519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.062647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.062681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 
00:35:33.485 [2024-11-18 00:40:57.062832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.062867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.063048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.063076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.063284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.063316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.063406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.063441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 00:35:33.485 [2024-11-18 00:40:57.063586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.485 [2024-11-18 00:40:57.063614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.485 qpair failed and we were unable to recover it. 
00:35:33.485 [2024-11-18 00:40:57.063764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.485 [2024-11-18 00:40:57.063798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.485 qpair failed and we were unable to recover it.
00:35:33.485 [2024-11-18 00:40:57.063968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.485 [2024-11-18 00:40:57.064002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.485 qpair failed and we were unable to recover it.
00:35:33.485 [2024-11-18 00:40:57.064111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.485 [2024-11-18 00:40:57.064148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.485 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.064307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.064363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.064456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.064483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.064637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.064664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.064763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.064790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.064972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.065033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.065189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.065225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.065387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.065428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.065533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.065564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.065752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.065787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.065942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.065977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.066117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.066152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.066266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.066294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.066463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.066503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.066696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.066732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.066860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.066919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.067070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.067104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.067248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.067282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.067400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.067428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.067519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.067546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.067691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.067725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.067857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.067902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.068070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.068104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.068244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.068288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.068416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.068444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.068572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.068598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.068727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.068755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.068853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.068881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.068966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.068994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.069119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.069146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.069224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.069251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.069388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.069417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.069503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.069530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.069644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.069672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.069796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.069830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.069998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.070032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.070138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.070172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.070326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.070373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.486 [2024-11-18 00:40:57.070493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.486 [2024-11-18 00:40:57.070521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.486 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.070656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.070703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.070844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.070878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.071106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.071140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.071273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.071299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.071400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.071427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.071539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.071567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.071659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.071707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.071844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.071877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.072013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.072048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.072196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.072222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.072326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.072359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.072466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.072506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.072640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.072679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.072834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.072898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.073075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.073110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.073221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.073255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.073435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.073462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.073651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.073720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.073838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.073887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.074024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.074068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.074155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.074184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.074268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.074294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.074409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.074444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.074600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.074648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.074822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.074871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.075015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.075043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.075163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.075192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.075327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.075388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.075534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.075567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.075693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.075719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.075900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.075935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.076040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.076074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.076185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.076212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.487 qpair failed and we were unable to recover it.
00:35:33.487 [2024-11-18 00:40:57.076374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.487 [2024-11-18 00:40:57.076415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.076513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.076542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.076671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.076708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.076851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.076895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.077000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.077035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.077218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.077301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.077462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.077490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.077578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.077628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.077829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.077865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.078036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.078073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.078223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.078257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.078391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.078419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.078501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.078528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.078603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.078649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.078790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.078824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.079004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.079040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.079150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.079193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.079335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.079363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.079508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.079536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.079721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.079756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.079877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.079928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.080081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.080119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.080263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.080299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.080446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.080474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.080584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.080611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.080724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.080770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.080926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.080963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.081102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.081137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.081295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.081338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.081445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.081497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.081658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.081694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.081807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.081842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.081988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.082030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.082180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.082219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.082341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.082388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.082492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.082542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.082716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.082753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.082861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.082899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.488 qpair failed and we were unable to recover it.
00:35:33.488 [2024-11-18 00:40:57.083074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.488 [2024-11-18 00:40:57.083176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.083323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.083352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.083497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.083525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.083639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.083673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.083862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.083889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.084005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.084032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.084119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.084149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.084287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.084336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.084473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.084504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.084630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.084657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.084773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.084800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.084888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.084914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.085023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.085052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.085166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.085207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.085332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.085363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.085479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.085508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.085596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.085625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.085741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.085768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.085859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.085886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.086013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.086040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.086159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.086191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.086288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.086325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.086452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.086481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.086596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.086623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.086781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.086817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.087026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.087062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.087208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.087244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.087413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.087441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.087632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.087659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.087839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.087872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.088056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.088083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.088211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.088239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.088371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.088399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.088588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.088615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.088830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.088863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.089037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.089072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.089208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.089242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.089382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.489 [2024-11-18 00:40:57.089410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.489 qpair failed and we were unable to recover it.
00:35:33.489 [2024-11-18 00:40:57.089524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.089551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.089708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.089758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.089878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.089920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.090071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.090107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.090231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.090267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.090394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.090421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.090531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.090559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.090714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.090749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.090882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.090925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.091066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.091100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.091234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.091273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.091422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.091450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.091537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.091564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.091650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.091677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.091798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.091824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.091906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.091933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.092063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.092121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.092272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.092301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.092425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.092454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.092538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.092567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.092693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.092729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.092853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.092881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.093037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.093072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.093210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.093244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.093408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.093436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.093545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.093572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.093738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.093772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.093913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.093948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.094064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.094101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.094257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.094298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.094407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.094436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.094549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.094597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.094741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.094788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.094900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.094927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.095043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.095071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.095177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.095219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.095348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.095378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.095489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.095539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.490 qpair failed and we were unable to recover it.
00:35:33.490 [2024-11-18 00:40:57.095663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.490 [2024-11-18 00:40:57.095690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.491 qpair failed and we were unable to recover it.
00:35:33.491 [2024-11-18 00:40:57.095777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.491 [2024-11-18 00:40:57.095803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.491 qpair failed and we were unable to recover it.
00:35:33.491 [2024-11-18 00:40:57.095941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.095987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.096097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.096126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.096219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.096246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.096341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.096391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.096540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.096576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 
00:35:33.491 [2024-11-18 00:40:57.096777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.096830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.096981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.097019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.097136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.097164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.097253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.097280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.097457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.097505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 
00:35:33.491 [2024-11-18 00:40:57.097646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.097701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.097839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.097888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.098026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.098061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.098178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.098206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.098350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.098378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 
00:35:33.491 [2024-11-18 00:40:57.098496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.098525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.098726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.098756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.098895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.098922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.099040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.099067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.099187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.099215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 
00:35:33.491 [2024-11-18 00:40:57.099323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.099368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.099531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.099584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.099729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.099790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.099965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.100022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.100145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.100173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 
00:35:33.491 [2024-11-18 00:40:57.100288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.100321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.100482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.100523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.100653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.100682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.100780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.100812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.100934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.100962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 
00:35:33.491 [2024-11-18 00:40:57.101110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.101138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.101222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.101249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.101378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.101407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.101548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.101586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.101705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.101742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 
00:35:33.491 [2024-11-18 00:40:57.101887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.491 [2024-11-18 00:40:57.101922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.491 qpair failed and we were unable to recover it. 00:35:33.491 [2024-11-18 00:40:57.102024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.102058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.102227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.102273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.102381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.102411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.102525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.102560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 
00:35:33.492 [2024-11-18 00:40:57.102686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.102720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.102862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.102896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.103018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.103068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.103245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.103285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.103407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.103439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 
00:35:33.492 [2024-11-18 00:40:57.103554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.103583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.103719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.103754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.103934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.103974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.104088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.104161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.104319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.104348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 
00:35:33.492 [2024-11-18 00:40:57.104469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.104497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.104665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.104716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.104888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.104922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.105034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.105068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.105251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.105286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 
00:35:33.492 [2024-11-18 00:40:57.105411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.105438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.105529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.105556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.105729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.105756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.105950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.106023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.106184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.106258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 
00:35:33.492 [2024-11-18 00:40:57.106442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.106470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.106597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.106624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.106742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.106776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.106930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.106964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.107103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.107144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 
00:35:33.492 [2024-11-18 00:40:57.107262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.107296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.107421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.107449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.107581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.107621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.107870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.107923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 00:35:33.492 [2024-11-18 00:40:57.108028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.492 [2024-11-18 00:40:57.108056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.492 qpair failed and we were unable to recover it. 
00:35:33.492 [2024-11-18 00:40:57.108147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.493 [2024-11-18 00:40:57.108176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.493 qpair failed and we were unable to recover it. 00:35:33.493 [2024-11-18 00:40:57.108267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.493 [2024-11-18 00:40:57.108294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.493 qpair failed and we were unable to recover it. 00:35:33.493 [2024-11-18 00:40:57.108395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.493 [2024-11-18 00:40:57.108423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.493 qpair failed and we were unable to recover it. 00:35:33.493 [2024-11-18 00:40:57.108510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.493 [2024-11-18 00:40:57.108539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.493 qpair failed and we were unable to recover it. 00:35:33.493 [2024-11-18 00:40:57.108652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.493 [2024-11-18 00:40:57.108679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.493 qpair failed and we were unable to recover it. 
00:35:33.493 [2024-11-18 00:40:57.108765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.493 [2024-11-18 00:40:57.108793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.493 qpair failed and we were unable to recover it. 00:35:33.493 [2024-11-18 00:40:57.108936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.493 [2024-11-18 00:40:57.108964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.493 qpair failed and we were unable to recover it. 00:35:33.493 [2024-11-18 00:40:57.109054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.493 [2024-11-18 00:40:57.109094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.493 qpair failed and we were unable to recover it. 00:35:33.493 [2024-11-18 00:40:57.109234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.493 [2024-11-18 00:40:57.109263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.493 qpair failed and we were unable to recover it. 00:35:33.493 [2024-11-18 00:40:57.109414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.493 [2024-11-18 00:40:57.109451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.493 qpair failed and we were unable to recover it. 
00:35:33.493 [2024-11-18 00:40:57.109623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.493 [2024-11-18 00:40:57.109658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.493 qpair failed and we were unable to recover it. 00:35:33.493 [2024-11-18 00:40:57.109797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.493 [2024-11-18 00:40:57.109831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.493 qpair failed and we were unable to recover it. 00:35:33.493 [2024-11-18 00:40:57.110005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.493 [2024-11-18 00:40:57.110056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.493 qpair failed and we were unable to recover it. 00:35:33.493 [2024-11-18 00:40:57.110147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.493 [2024-11-18 00:40:57.110175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.493 qpair failed and we were unable to recover it. 00:35:33.493 [2024-11-18 00:40:57.110321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.493 [2024-11-18 00:40:57.110348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.493 qpair failed and we were unable to recover it. 
00:35:33.493 [2024-11-18 00:40:57.110435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.110462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.110606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.110640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.110747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.110781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.110885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.110919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.111120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.111157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.111272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.111299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.111419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.111458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.111637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.111674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.111820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.111856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.112002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.112039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.112199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.112235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.112391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.112418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.112558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.112592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.112702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.112736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.112852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.112878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.113039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.113074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.113233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.113284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.113468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.113509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.113662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.113699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.113850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.113884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.114077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.114113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.114259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.114289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.114435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.493 [2024-11-18 00:40:57.114464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.493 qpair failed and we were unable to recover it.
00:35:33.493 [2024-11-18 00:40:57.114560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.114591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.114811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.114840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.114992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.115029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.115149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.115184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.115294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.115341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.115448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.115476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.115597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.115625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.115762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.115796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.116001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.116036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.116191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.116219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.116349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.116377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.116463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.116491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.116579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.116635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.116781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.116817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.116964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.117013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.117123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.117168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.117318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.117347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.117460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.117487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.117628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.117676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.117833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.117873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.118048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.118126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.118283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.118318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.118410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.118439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.118571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.118604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.118709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.118737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.118842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.118870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.119042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.119077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.119209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.119243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.119400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.119428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.119530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.119570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.119717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.119766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.119861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.119888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.120056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.120103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.120190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.120217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.120306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.120346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.120467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.120494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.120584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.120611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.120731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.494 [2024-11-18 00:40:57.120758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.494 qpair failed and we were unable to recover it.
00:35:33.494 [2024-11-18 00:40:57.120880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.120909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.121001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.121029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.121148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.121176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.121258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.121287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.121408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.121435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.121528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.121555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.121665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.121699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.121897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.121945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.122061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.122089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.122181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.122207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.122323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.122356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.122440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.122468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.122588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.122615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.122758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.122785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.122908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.122935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.123049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.123077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.123169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.123198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.123293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.123332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.123530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.123575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.123662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.123690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.123821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.123849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.123939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.123967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.124059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.124086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.124168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.124195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.124320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.124348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.124461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.124493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.124590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.124618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.124733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.124760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.124845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.124871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.124985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.125012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.125167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.125197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.125322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.125351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.125431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.125459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.125573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.125607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.125750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.125777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.125897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.125925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.126032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.126077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.495 [2024-11-18 00:40:57.126189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.495 [2024-11-18 00:40:57.126217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.495 qpair failed and we were unable to recover it.
00:35:33.496 [2024-11-18 00:40:57.126321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.496 [2024-11-18 00:40:57.126363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.496 qpair failed and we were unable to recover it.
00:35:33.496 [2024-11-18 00:40:57.126524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.496 [2024-11-18 00:40:57.126563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.496 qpair failed and we were unable to recover it.
00:35:33.496 [2024-11-18 00:40:57.126669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.496 [2024-11-18 00:40:57.126707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.496 qpair failed and we were unable to recover it.
00:35:33.496 [2024-11-18 00:40:57.126850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.496 [2024-11-18 00:40:57.126888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.496 qpair failed and we were unable to recover it.
00:35:33.496 [2024-11-18 00:40:57.127064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.496 [2024-11-18 00:40:57.127102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.496 qpair failed and we were unable to recover it.
00:35:33.496 [2024-11-18 00:40:57.127231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.496 [2024-11-18 00:40:57.127278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.496 qpair failed and we were unable to recover it.
00:35:33.496 [2024-11-18 00:40:57.127380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.496 [2024-11-18 00:40:57.127408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.496 qpair failed and we were unable to recover it.
00:35:33.496 [2024-11-18 00:40:57.127509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.496 [2024-11-18 00:40:57.127543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.496 qpair failed and we were unable to recover it.
00:35:33.496 [2024-11-18 00:40:57.127688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.496 [2024-11-18 00:40:57.127726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.496 qpair failed and we were unable to recover it.
00:35:33.496 [2024-11-18 00:40:57.127929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.496 [2024-11-18 00:40:57.127965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.496 qpair failed and we were unable to recover it.
00:35:33.496 [2024-11-18 00:40:57.128128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.496 [2024-11-18 00:40:57.128179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.496 qpair failed and we were unable to recover it.
00:35:33.496 [2024-11-18 00:40:57.128296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.496 [2024-11-18 00:40:57.128331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.496 qpair failed and we were unable to recover it.
00:35:33.496 [2024-11-18 00:40:57.128446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.496 [2024-11-18 00:40:57.128473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.496 qpair failed and we were unable to recover it.
00:35:33.496 [2024-11-18 00:40:57.128621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.496 [2024-11-18 00:40:57.128648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.496 qpair failed and we were unable to recover it.
00:35:33.496 [2024-11-18 00:40:57.128767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.496 [2024-11-18 00:40:57.128813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.496 qpair failed and we were unable to recover it.
00:35:33.496 [2024-11-18 00:40:57.128903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.496 [2024-11-18 00:40:57.128930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.496 qpair failed and we were unable to recover it.
00:35:33.496 [2024-11-18 00:40:57.129012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.496 [2024-11-18 00:40:57.129040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.496 qpair failed and we were unable to recover it. 00:35:33.496 [2024-11-18 00:40:57.129160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.496 [2024-11-18 00:40:57.129186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.496 qpair failed and we were unable to recover it. 00:35:33.496 [2024-11-18 00:40:57.129303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.496 [2024-11-18 00:40:57.129343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.496 qpair failed and we were unable to recover it. 00:35:33.496 [2024-11-18 00:40:57.129460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.496 [2024-11-18 00:40:57.129488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.496 qpair failed and we were unable to recover it. 00:35:33.496 [2024-11-18 00:40:57.129580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.496 [2024-11-18 00:40:57.129607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.496 qpair failed and we were unable to recover it. 
00:35:33.496 [2024-11-18 00:40:57.129703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.496 [2024-11-18 00:40:57.129730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.496 qpair failed and we were unable to recover it. 00:35:33.496 [2024-11-18 00:40:57.129831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.496 [2024-11-18 00:40:57.129861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.496 qpair failed and we were unable to recover it. 00:35:33.496 [2024-11-18 00:40:57.130009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.496 [2024-11-18 00:40:57.130038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.496 qpair failed and we were unable to recover it. 00:35:33.496 [2024-11-18 00:40:57.130152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.496 [2024-11-18 00:40:57.130179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.496 qpair failed and we were unable to recover it. 00:35:33.496 [2024-11-18 00:40:57.130323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.496 [2024-11-18 00:40:57.130351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.496 qpair failed and we were unable to recover it. 
00:35:33.496 [2024-11-18 00:40:57.130486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.496 [2024-11-18 00:40:57.130522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.496 qpair failed and we were unable to recover it. 00:35:33.496 [2024-11-18 00:40:57.130711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.496 [2024-11-18 00:40:57.130763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.496 qpair failed and we were unable to recover it. 00:35:33.496 [2024-11-18 00:40:57.130883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.496 [2024-11-18 00:40:57.130935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.496 qpair failed and we were unable to recover it. 00:35:33.496 [2024-11-18 00:40:57.131155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.496 [2024-11-18 00:40:57.131224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.496 qpair failed and we were unable to recover it. 00:35:33.496 [2024-11-18 00:40:57.131385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.496 [2024-11-18 00:40:57.131412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.496 qpair failed and we were unable to recover it. 
00:35:33.496 [2024-11-18 00:40:57.131553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.496 [2024-11-18 00:40:57.131586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.496 qpair failed and we were unable to recover it. 00:35:33.496 [2024-11-18 00:40:57.131731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.496 [2024-11-18 00:40:57.131766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.496 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.131913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.131949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.132100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.132134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.132275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.132318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 
00:35:33.497 [2024-11-18 00:40:57.132455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.132482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.132635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.132669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.132814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.132849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.132992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.133031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.133149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.133184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 
00:35:33.497 [2024-11-18 00:40:57.133344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.133372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.133486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.133513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.133656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.133706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.133793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.133820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.134012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.134064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 
00:35:33.497 [2024-11-18 00:40:57.134174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.134222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.134336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.134365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.134483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.134511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.134601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.134629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.134789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.134817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 
00:35:33.497 [2024-11-18 00:40:57.135071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.135109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.135295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.135340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.135449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.135478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.135600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.135638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.135799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.135834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 
00:35:33.497 [2024-11-18 00:40:57.135948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.135982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.136125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.136160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.136329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.136373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.136455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.136483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.136578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.136619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 
00:35:33.497 [2024-11-18 00:40:57.136716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.136745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.136864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.136914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.137093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.137138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.137247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.137274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.137441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.137482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 
00:35:33.497 [2024-11-18 00:40:57.137591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.137620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.137705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.137739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.137855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.137882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.137998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.497 [2024-11-18 00:40:57.138028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.497 qpair failed and we were unable to recover it. 00:35:33.497 [2024-11-18 00:40:57.138176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.138204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 
00:35:33.498 [2024-11-18 00:40:57.138350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.138379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 00:35:33.498 [2024-11-18 00:40:57.138465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.138493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 00:35:33.498 [2024-11-18 00:40:57.138588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.138619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 00:35:33.498 [2024-11-18 00:40:57.138768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.138802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 00:35:33.498 [2024-11-18 00:40:57.138971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.139006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 
00:35:33.498 [2024-11-18 00:40:57.139111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.139146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 00:35:33.498 [2024-11-18 00:40:57.139294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.139333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 00:35:33.498 [2024-11-18 00:40:57.139452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.139479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 00:35:33.498 [2024-11-18 00:40:57.139647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.139695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 00:35:33.498 [2024-11-18 00:40:57.139796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.139830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 
00:35:33.498 [2024-11-18 00:40:57.140006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.140054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 00:35:33.498 [2024-11-18 00:40:57.140201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.140230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 00:35:33.498 [2024-11-18 00:40:57.140356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.140384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 00:35:33.498 [2024-11-18 00:40:57.140501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.140528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 00:35:33.498 [2024-11-18 00:40:57.140623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.140652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 
00:35:33.498 [2024-11-18 00:40:57.140748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.140776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 00:35:33.498 [2024-11-18 00:40:57.140945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.140998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 00:35:33.498 [2024-11-18 00:40:57.141191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.141229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 00:35:33.498 [2024-11-18 00:40:57.141358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.141386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 00:35:33.498 [2024-11-18 00:40:57.141529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.141565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it. 
00:35:33.498 [2024-11-18 00:40:57.141703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.498 [2024-11-18 00:40:57.141740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.498 qpair failed and we were unable to recover it.
00:35:33.498-00:35:33.501 [... the same three-line failure (posix.c:1054 connect() failed, errno = 111 / nvme_tcp.c:2288 sock connection error / "qpair failed and we were unable to recover it") repeats for roughly 110 further reconnect attempts between 00:40:57.141 and 00:40:57.163, cycling over tqpair handles 0x7eff44000b90, 0x7eff48000b90, 0x7eff50000b90, and 0x18bcb40, all targeting addr=10.0.0.2, port=4420 ...]
00:35:33.501 [2024-11-18 00:40:57.163290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.501 [2024-11-18 00:40:57.163355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.501 qpair failed and we were unable to recover it. 00:35:33.501 [2024-11-18 00:40:57.163496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.501 [2024-11-18 00:40:57.163524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.501 qpair failed and we were unable to recover it. 00:35:33.501 [2024-11-18 00:40:57.163639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.501 [2024-11-18 00:40:57.163666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.501 qpair failed and we were unable to recover it. 00:35:33.501 [2024-11-18 00:40:57.163782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.501 [2024-11-18 00:40:57.163814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.501 qpair failed and we were unable to recover it. 00:35:33.501 [2024-11-18 00:40:57.163947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.501 [2024-11-18 00:40:57.163977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.501 qpair failed and we were unable to recover it. 
00:35:33.501 [2024-11-18 00:40:57.164109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.501 [2024-11-18 00:40:57.164139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.501 qpair failed and we were unable to recover it. 00:35:33.501 [2024-11-18 00:40:57.164271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.501 [2024-11-18 00:40:57.164302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.501 qpair failed and we were unable to recover it. 00:35:33.501 [2024-11-18 00:40:57.164478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.501 [2024-11-18 00:40:57.164505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.501 qpair failed and we were unable to recover it. 00:35:33.501 [2024-11-18 00:40:57.164592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.501 [2024-11-18 00:40:57.164627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.501 qpair failed and we were unable to recover it. 00:35:33.501 [2024-11-18 00:40:57.164811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.501 [2024-11-18 00:40:57.164843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.501 qpair failed and we were unable to recover it. 
00:35:33.501 [2024-11-18 00:40:57.164955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.501 [2024-11-18 00:40:57.164981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.501 qpair failed and we were unable to recover it. 00:35:33.501 [2024-11-18 00:40:57.165105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.165136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.165269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.165301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.165448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.165475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.165562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.165589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 
00:35:33.502 [2024-11-18 00:40:57.165768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.165799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.165989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.166020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.166148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.166179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.166318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.166373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.166467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.166493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 
00:35:33.502 [2024-11-18 00:40:57.166638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.166666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.166808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.166840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.166962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.167005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.167135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.167167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.167308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.167347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 
00:35:33.502 [2024-11-18 00:40:57.167450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.167476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.167584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.167611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.167802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.167828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.167979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.168024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.168156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.168188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 
00:35:33.502 [2024-11-18 00:40:57.168285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.168324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.168456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.168499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.168623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.168694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.168883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.168916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.169019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.169051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 
00:35:33.502 [2024-11-18 00:40:57.169206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.169234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.169399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.169428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.169536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.169588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.169840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.169871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.169962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.169993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 
00:35:33.502 [2024-11-18 00:40:57.170104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.170131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.170219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.170248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.170378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.170406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.170543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.170574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.170702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.170733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 
00:35:33.502 [2024-11-18 00:40:57.170901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.170932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.171065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.171096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.171232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.171264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.502 [2024-11-18 00:40:57.171403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.502 [2024-11-18 00:40:57.171437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.502 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.171558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.171589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 
00:35:33.503 [2024-11-18 00:40:57.171691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.171722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.171827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.171858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.172016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.172048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.172178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.172208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.172359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.172386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 
00:35:33.503 [2024-11-18 00:40:57.172498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.172525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.172641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.172667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.172813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.172841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.172986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.173044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.173168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.173196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 
00:35:33.503 [2024-11-18 00:40:57.173325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.173354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.173443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.173471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.173602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.173628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.173757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.173785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.173898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.173927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 
00:35:33.503 [2024-11-18 00:40:57.174016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.174042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.174168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.174197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.174320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.174365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.174507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.174554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.174669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.174703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 
00:35:33.503 [2024-11-18 00:40:57.174864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.174895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.175031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.175063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.175226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.175257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.175411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.175441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 00:35:33.503 [2024-11-18 00:40:57.175636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.503 [2024-11-18 00:40:57.175674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.503 qpair failed and we were unable to recover it. 
00:35:33.503 [2024-11-18 00:40:57.175787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.503 [2024-11-18 00:40:57.175865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.503 qpair failed and we were unable to recover it.
[… the same three-line error sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it") repeats for roughly 110 further connection attempts between 00:40:57.176 and 00:40:57.195, against tqpair=0x7eff44000b90, 0x7eff50000b90, and 0x18bcb40, all with addr=10.0.0.2, port=4420 …]
00:35:33.506 [2024-11-18 00:40:57.195563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.506 [2024-11-18 00:40:57.195595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.506 qpair failed and we were unable to recover it. 00:35:33.506 [2024-11-18 00:40:57.195715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.506 [2024-11-18 00:40:57.195746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.506 qpair failed and we were unable to recover it. 00:35:33.506 [2024-11-18 00:40:57.195909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.506 [2024-11-18 00:40:57.195941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.506 qpair failed and we were unable to recover it. 00:35:33.506 [2024-11-18 00:40:57.196070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.506 [2024-11-18 00:40:57.196102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.506 qpair failed and we were unable to recover it. 00:35:33.506 [2024-11-18 00:40:57.196235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.506 [2024-11-18 00:40:57.196267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.506 qpair failed and we were unable to recover it. 
00:35:33.506 [2024-11-18 00:40:57.196409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.196446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.196604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.196635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.196759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.196791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.196928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.196960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.197090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.197122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 
00:35:33.507 [2024-11-18 00:40:57.197284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.197324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.197494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.197526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.197623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.197655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.197785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.197816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.197981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.198013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 
00:35:33.507 [2024-11-18 00:40:57.198159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.198215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.198405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.198437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.198573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.198605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.198735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.198766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.198905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.198938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 
00:35:33.507 [2024-11-18 00:40:57.199049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.199081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.199213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.199245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.199376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.199409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.199574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.199606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.199708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.199740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 
00:35:33.507 [2024-11-18 00:40:57.199899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.199931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.200065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.200097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.200259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.200290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.200456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.200488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.200653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.200685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 
00:35:33.507 [2024-11-18 00:40:57.200783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.200816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.200978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.201009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.201118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.201171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.201338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.201370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.201530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.201561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 
00:35:33.507 [2024-11-18 00:40:57.201696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.201728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.201911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.201944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.202112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.202145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.202323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.202358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.202495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.202529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 
00:35:33.507 [2024-11-18 00:40:57.202661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.202694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.202801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.202835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.203003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.203036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.203202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.203236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.203383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.203417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 
00:35:33.507 [2024-11-18 00:40:57.203515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.507 [2024-11-18 00:40:57.203554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.507 qpair failed and we were unable to recover it. 00:35:33.507 [2024-11-18 00:40:57.203694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.203727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.203870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.203903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.203999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.204033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.204171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.204205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 
00:35:33.508 [2024-11-18 00:40:57.204304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.204345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.204510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.204543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.204708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.204741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.204883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.204916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.205059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.205092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 
00:35:33.508 [2024-11-18 00:40:57.205192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.205225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.205369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.205403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.205565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.205598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.205709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.205742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.205916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.205949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 
00:35:33.508 [2024-11-18 00:40:57.206082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.206114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.206246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.206279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.206469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.206501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.206659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.206691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.206817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.206848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 
00:35:33.508 [2024-11-18 00:40:57.206979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.207011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.207154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.207186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.207298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.207341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.207489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.207521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.207654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.207687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 
00:35:33.508 [2024-11-18 00:40:57.207854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.207887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.208023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.208056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.208189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.208228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.208368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.208402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.208548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.208581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 
00:35:33.508 [2024-11-18 00:40:57.208754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.208787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.208937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.208970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.209108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.209141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.209302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.209342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 00:35:33.508 [2024-11-18 00:40:57.209482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.508 [2024-11-18 00:40:57.209516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.508 qpair failed and we were unable to recover it. 
00:35:33.512 [2024-11-18 00:40:57.229142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.229175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.229321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.229356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.229461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.229500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.229634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.229667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.229815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.229849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 
00:35:33.512 [2024-11-18 00:40:57.229957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.229989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.230087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.230120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.230262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.230296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.230413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.230447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.230616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.230649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 
00:35:33.512 [2024-11-18 00:40:57.230820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.230854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.230953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.230987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.231166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.231199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.231371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.231405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.231506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.231539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 
00:35:33.512 [2024-11-18 00:40:57.231702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.231735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.231913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.231946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.232085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.232118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.232282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.232322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.232463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.232496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 
00:35:33.512 [2024-11-18 00:40:57.232659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.232691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.232824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.232857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.232995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.233029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.233196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.233229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.233335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.233368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 
00:35:33.512 [2024-11-18 00:40:57.233539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.233573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.233682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.233715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.233859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.233892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.234057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.234090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.512 [2024-11-18 00:40:57.234217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.234266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 
00:35:33.512 [2024-11-18 00:40:57.234434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.512 [2024-11-18 00:40:57.234468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.512 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.234617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.234651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.234766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.234799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.234927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.234960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.235114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.235161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 
00:35:33.513 [2024-11-18 00:40:57.235344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.235377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.235543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.235576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.235711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.235745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.235846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.235879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.236019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.236052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 
00:35:33.513 [2024-11-18 00:40:57.236190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.236223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.236387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.236420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.236520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.236559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.236739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.236772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.236915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.236948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 
00:35:33.513 [2024-11-18 00:40:57.237083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.237116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.237283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.237323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.237461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.237494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.237635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.237668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.237847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.237880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 
00:35:33.513 [2024-11-18 00:40:57.238019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.238052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.238203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.238235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.238356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.238389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.238496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.238529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.238660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.238693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 
00:35:33.513 [2024-11-18 00:40:57.238831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.238864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.239034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.239067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.239177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.239212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.239384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.239418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.239519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.239552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 
00:35:33.513 [2024-11-18 00:40:57.239716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.239750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.239921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.239956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.240095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.240130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.513 [2024-11-18 00:40:57.240304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.513 [2024-11-18 00:40:57.240346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.513 qpair failed and we were unable to recover it. 00:35:33.514 [2024-11-18 00:40:57.240519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.514 [2024-11-18 00:40:57.240554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.514 qpair failed and we were unable to recover it. 
00:35:33.514 [2024-11-18 00:40:57.240695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.514 [2024-11-18 00:40:57.240730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.514 qpair failed and we were unable to recover it. 00:35:33.514 [2024-11-18 00:40:57.240849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.514 [2024-11-18 00:40:57.240883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.514 qpair failed and we were unable to recover it. 00:35:33.514 [2024-11-18 00:40:57.241056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.514 [2024-11-18 00:40:57.241090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.514 qpair failed and we were unable to recover it. 00:35:33.514 [2024-11-18 00:40:57.241230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.514 [2024-11-18 00:40:57.241265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.514 qpair failed and we were unable to recover it. 00:35:33.514 [2024-11-18 00:40:57.241401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.514 [2024-11-18 00:40:57.241437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.514 qpair failed and we were unable to recover it. 
00:35:33.514 [2024-11-18 00:40:57.241583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.514 [2024-11-18 00:40:57.241618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.514 qpair failed and we were unable to recover it. 00:35:33.514 [2024-11-18 00:40:57.241784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.514 [2024-11-18 00:40:57.241819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.514 qpair failed and we were unable to recover it. 00:35:33.514 [2024-11-18 00:40:57.241957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.514 [2024-11-18 00:40:57.241991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.514 qpair failed and we were unable to recover it. 00:35:33.514 [2024-11-18 00:40:57.242131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.514 [2024-11-18 00:40:57.242165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.514 qpair failed and we were unable to recover it. 00:35:33.514 [2024-11-18 00:40:57.242324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.514 [2024-11-18 00:40:57.242360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.514 qpair failed and we were unable to recover it. 
00:35:33.514 [2024-11-18 00:40:57.242470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.514 [2024-11-18 00:40:57.242504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.514 qpair failed and we were unable to recover it.
[... identical error sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 00:40:57.242470 through 00:40:57.263266 ...]
00:35:33.800 [2024-11-18 00:40:57.263385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.263420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-18 00:40:57.263595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.263629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-18 00:40:57.263767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.263801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-18 00:40:57.263941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.263975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-18 00:40:57.264086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.264121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 
00:35:33.800 [2024-11-18 00:40:57.264262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.264297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-18 00:40:57.264453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.264487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-18 00:40:57.264592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.264627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-18 00:40:57.264776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.264811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-18 00:40:57.264981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.265016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 
00:35:33.800 [2024-11-18 00:40:57.265123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.265159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-18 00:40:57.265306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.265350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-18 00:40:57.265462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.265503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-18 00:40:57.265642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.265677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-18 00:40:57.265773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.265807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 
00:35:33.800 [2024-11-18 00:40:57.265923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.265958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-18 00:40:57.266098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.266132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-18 00:40:57.266274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.266308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-18 00:40:57.266463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.266498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.800 [2024-11-18 00:40:57.266664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.266699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 
00:35:33.800 [2024-11-18 00:40:57.266808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.800 [2024-11-18 00:40:57.266842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.800 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.266976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.267011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.267181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.267215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.267329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.267364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.267486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.267521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 
00:35:33.801 [2024-11-18 00:40:57.267664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.267700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.267879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.267913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.268022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.268056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.268194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.268228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.268376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.268411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 
00:35:33.801 [2024-11-18 00:40:57.268589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.268624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.268761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.268796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.268911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.268946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.269120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.269155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.269291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.269348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 
00:35:33.801 [2024-11-18 00:40:57.269472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.269507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.269684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.269718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.269897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.269931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.270073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.270107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.270258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.270293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 
00:35:33.801 [2024-11-18 00:40:57.270421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.270455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.270627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.270662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.270839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.270874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.271045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.271080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.271248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.271282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 
00:35:33.801 [2024-11-18 00:40:57.271473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.271508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.271622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.271658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.271771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.271806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.271980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.272015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.272181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.272215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 
00:35:33.801 [2024-11-18 00:40:57.272389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.272424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.272531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.272566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.272704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.272745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.272876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.272911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.273088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.273124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 
00:35:33.801 [2024-11-18 00:40:57.273260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.273294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.273455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.273490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.273631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.801 [2024-11-18 00:40:57.273666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.801 qpair failed and we were unable to recover it. 00:35:33.801 [2024-11-18 00:40:57.273807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-18 00:40:57.273841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-18 00:40:57.273976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-18 00:40:57.274010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 
00:35:33.802 [2024-11-18 00:40:57.274196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-18 00:40:57.274231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-18 00:40:57.274373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-18 00:40:57.274409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-18 00:40:57.274549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-18 00:40:57.274584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-18 00:40:57.274688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-18 00:40:57.274724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-18 00:40:57.274861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-18 00:40:57.274894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 
00:35:33.802 [2024-11-18 00:40:57.275065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-18 00:40:57.275101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-18 00:40:57.275308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-18 00:40:57.275375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-18 00:40:57.275545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-18 00:40:57.275580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-18 00:40:57.275719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-18 00:40:57.275754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-18 00:40:57.275928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-18 00:40:57.275970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 
00:35:33.802 [2024-11-18 00:40:57.276143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-18 00:40:57.276186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-18 00:40:57.276325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-18 00:40:57.276375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-18 00:40:57.276572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-18 00:40:57.276613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-18 00:40:57.276745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-18 00:40:57.276787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 00:35:33.802 [2024-11-18 00:40:57.276923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.802 [2024-11-18 00:40:57.276965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.802 qpair failed and we were unable to recover it. 
00:35:33.802 [2024-11-18 00:40:57.277127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.802 [2024-11-18 00:40:57.277169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.802 qpair failed and we were unable to recover it.
00:35:33.804 [2024-11-18 00:40:57.293811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.804 [2024-11-18 00:40:57.293853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.804 qpair failed and we were unable to recover it. 00:35:33.804 [2024-11-18 00:40:57.293986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.804 [2024-11-18 00:40:57.294028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.804 qpair failed and we were unable to recover it. 00:35:33.804 [2024-11-18 00:40:57.294159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.804 [2024-11-18 00:40:57.294202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.804 qpair failed and we were unable to recover it. 00:35:33.804 [2024-11-18 00:40:57.294400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.804 [2024-11-18 00:40:57.294444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.804 qpair failed and we were unable to recover it. 00:35:33.804 [2024-11-18 00:40:57.294664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.804 [2024-11-18 00:40:57.294719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.804 qpair failed and we were unable to recover it. 
00:35:33.804 [2024-11-18 00:40:57.294900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.804 [2024-11-18 00:40:57.294944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.804 qpair failed and we were unable to recover it. 00:35:33.804 [2024-11-18 00:40:57.295117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.804 [2024-11-18 00:40:57.295163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.804 qpair failed and we were unable to recover it. 00:35:33.804 [2024-11-18 00:40:57.295305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.804 [2024-11-18 00:40:57.295364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.804 qpair failed and we were unable to recover it. 00:35:33.804 [2024-11-18 00:40:57.295557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.804 [2024-11-18 00:40:57.295602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.804 qpair failed and we were unable to recover it. 00:35:33.804 [2024-11-18 00:40:57.295809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.804 [2024-11-18 00:40:57.295854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.804 qpair failed and we were unable to recover it. 
00:35:33.804 [2024-11-18 00:40:57.296028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.804 [2024-11-18 00:40:57.296073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.804 qpair failed and we were unable to recover it. 00:35:33.804 [2024-11-18 00:40:57.296203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.805 [2024-11-18 00:40:57.296261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.805 qpair failed and we were unable to recover it. 00:35:33.805 [2024-11-18 00:40:57.296438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.805 [2024-11-18 00:40:57.296483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.805 qpair failed and we were unable to recover it. 00:35:33.805 [2024-11-18 00:40:57.296620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.805 [2024-11-18 00:40:57.296664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.805 qpair failed and we were unable to recover it. 00:35:33.805 [2024-11-18 00:40:57.296843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.805 [2024-11-18 00:40:57.296893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.805 qpair failed and we were unable to recover it. 
00:35:33.805 [2024-11-18 00:40:57.297065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.805 [2024-11-18 00:40:57.297109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.805 qpair failed and we were unable to recover it. 00:35:33.805 [2024-11-18 00:40:57.297298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.805 [2024-11-18 00:40:57.297352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.805 qpair failed and we were unable to recover it. 00:35:33.805 [2024-11-18 00:40:57.297522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.805 [2024-11-18 00:40:57.297587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.805 qpair failed and we were unable to recover it. 00:35:33.805 [2024-11-18 00:40:57.297760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.805 [2024-11-18 00:40:57.297804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.805 qpair failed and we were unable to recover it. 00:35:33.805 [2024-11-18 00:40:57.297954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.805 [2024-11-18 00:40:57.298014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.805 qpair failed and we were unable to recover it. 
00:35:33.805 [2024-11-18 00:40:57.298182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.805 [2024-11-18 00:40:57.298226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.805 qpair failed and we were unable to recover it. 00:35:33.805 [2024-11-18 00:40:57.298406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.805 [2024-11-18 00:40:57.298449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.805 qpair failed and we were unable to recover it. 00:35:33.805 [2024-11-18 00:40:57.298654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.805 [2024-11-18 00:40:57.298697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.805 qpair failed and we were unable to recover it. 00:35:33.805 [2024-11-18 00:40:57.298878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.805 [2024-11-18 00:40:57.298923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.805 qpair failed and we were unable to recover it. 00:35:33.805 [2024-11-18 00:40:57.299092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.805 [2024-11-18 00:40:57.299135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:33.805 qpair failed and we were unable to recover it. 
00:35:33.805 [2024-11-18 00:40:57.299279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.299342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.299501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.299550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.299703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.299747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.299920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.299964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.300177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.300222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.300389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.300432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.300648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.300690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.300871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.300916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.301061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.301106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.301293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.301349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.301487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.301530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.301710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.301754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.301941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.301985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.302126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.302174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.302354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.302399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.302562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.302605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.302806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.302847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.302993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.303038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.303302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.303386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.303527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.303577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.303781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.303827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.304000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.304045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.304212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.304258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.805 [2024-11-18 00:40:57.304492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.805 [2024-11-18 00:40:57.304534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.805 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.304669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.304712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.304903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.304948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.305118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.305163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.305327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.305372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.305513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.305554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.305742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.305785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.305992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.306036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.306166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.306210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.306346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.306411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.306587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.306630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.306846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.306892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.307079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.307123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.307291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.307340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.307509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.307550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.307766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.307811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.308004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.308050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.308193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.308237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.308397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.308440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.308581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.308623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.308803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.308848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.309012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.309056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.309224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.309278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.309545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.309623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.309804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.309850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.310053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.310099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.310294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.310353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.310522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.310564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.310728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.310770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.310944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.311003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.311180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.311243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.311451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.311493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.311633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.311677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.311794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.311836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.312038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.312095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.312293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.312347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.312501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.312555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.312775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.312818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.806 qpair failed and we were unable to recover it.
00:35:33.806 [2024-11-18 00:40:57.312992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.806 [2024-11-18 00:40:57.313035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.313220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.313264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.313461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.313503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.313716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.313759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.313962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.314005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.314182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.314224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.314384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.314428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.314613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.314656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.314824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.314867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.315043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.315086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.315286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.315340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.315511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.315554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.315691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.315734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.315933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.315979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.316172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.316219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.316443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.316489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.316658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.316703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.316863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.316908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.317074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.317133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.317356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.317400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.317560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.317603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.317773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.317816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.318007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.318054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.318266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.318325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.318473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.318521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.318748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.318794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.318939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.318985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.319164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.319211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.319393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.319440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.319582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.319629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.319820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.319866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.320040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.320086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.320222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.320267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.320501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.807 [2024-11-18 00:40:57.320547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.807 qpair failed and we were unable to recover it.
00:35:33.807 [2024-11-18 00:40:57.320726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.807 [2024-11-18 00:40:57.320771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.807 qpair failed and we were unable to recover it. 00:35:33.807 [2024-11-18 00:40:57.320985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.807 [2024-11-18 00:40:57.321031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.807 qpair failed and we were unable to recover it. 00:35:33.807 [2024-11-18 00:40:57.321285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.807 [2024-11-18 00:40:57.321375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.807 qpair failed and we were unable to recover it. 00:35:33.807 [2024-11-18 00:40:57.321557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.807 [2024-11-18 00:40:57.321605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.807 qpair failed and we were unable to recover it. 00:35:33.807 [2024-11-18 00:40:57.321820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.807 [2024-11-18 00:40:57.321874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 
00:35:33.808 [2024-11-18 00:40:57.322095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.322141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.322278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.322339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.322485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.322531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.322672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.322718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.322848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.322894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 
00:35:33.808 [2024-11-18 00:40:57.323061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.323107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.323293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.323353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.323595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.323665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.323860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.323908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.324078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.324125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 
00:35:33.808 [2024-11-18 00:40:57.324302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.324367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.324556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.324603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.324821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.324867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.325065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.325111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.325241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.325287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 
00:35:33.808 [2024-11-18 00:40:57.325506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.325552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.325766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.325812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.326004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.326052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.326266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.326325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.326512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.326558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 
00:35:33.808 [2024-11-18 00:40:57.326696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.326741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.326921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.326967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.327101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.327147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.327300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.327364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.327512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.327557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 
00:35:33.808 [2024-11-18 00:40:57.327770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.327815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.328030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.328084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.328264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.328326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.328492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.328537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.328759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.328805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 
00:35:33.808 [2024-11-18 00:40:57.328984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.329030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.329169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.329214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.329397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.329444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.329592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.329638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.329765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.329810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 
00:35:33.808 [2024-11-18 00:40:57.330024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.330069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.330226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.330274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.808 [2024-11-18 00:40:57.330501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.808 [2024-11-18 00:40:57.330547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.808 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.330726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.330774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.330985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.331032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 
00:35:33.809 [2024-11-18 00:40:57.331305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.331366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.331542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.331587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.331726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.331791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.331966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.332015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.332219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.332268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 
00:35:33.809 [2024-11-18 00:40:57.332507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.332557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.332747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.332795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.332996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.333041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.333221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.333267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.333435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.333481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 
00:35:33.809 [2024-11-18 00:40:57.333665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.333710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.333893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.333940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.334088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.334135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.334336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.334394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.334619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.334670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 
00:35:33.809 [2024-11-18 00:40:57.334875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.334939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.335154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.335200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.335390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.335438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.335651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.335697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.335909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.335955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 
00:35:33.809 [2024-11-18 00:40:57.336129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.336176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.336409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.336474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.336723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.336788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.337022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.337068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.337234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.337279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 
00:35:33.809 [2024-11-18 00:40:57.337507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.337553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.337708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.337756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.337955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.338002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.338147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.338211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.338410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.338461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 
00:35:33.809 [2024-11-18 00:40:57.338651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.809 [2024-11-18 00:40:57.338699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.809 qpair failed and we were unable to recover it. 00:35:33.809 [2024-11-18 00:40:57.338864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.810 [2024-11-18 00:40:57.338913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.810 qpair failed and we were unable to recover it. 00:35:33.810 [2024-11-18 00:40:57.339066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.810 [2024-11-18 00:40:57.339115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.810 qpair failed and we were unable to recover it. 00:35:33.810 [2024-11-18 00:40:57.339326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.810 [2024-11-18 00:40:57.339373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.810 qpair failed and we were unable to recover it. 00:35:33.810 [2024-11-18 00:40:57.339523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.810 [2024-11-18 00:40:57.339569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.810 qpair failed and we were unable to recover it. 
00:35:33.810 [2024-11-18 00:40:57.339753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.810 [2024-11-18 00:40:57.339799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.810 qpair failed and we were unable to recover it. 00:35:33.810 [2024-11-18 00:40:57.339983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.810 [2024-11-18 00:40:57.340029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.810 qpair failed and we were unable to recover it. 00:35:33.810 [2024-11-18 00:40:57.340172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.810 [2024-11-18 00:40:57.340218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.810 qpair failed and we were unable to recover it. 00:35:33.810 [2024-11-18 00:40:57.340432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.810 [2024-11-18 00:40:57.340479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.810 qpair failed and we were unable to recover it. 00:35:33.810 [2024-11-18 00:40:57.340661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.810 [2024-11-18 00:40:57.340707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.810 qpair failed and we were unable to recover it. 
00:35:33.810 [2024-11-18 00:40:57.340885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.810 [2024-11-18 00:40:57.340938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.810 qpair failed and we were unable to recover it.
[The same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it" message triplet repeats for every subsequent connection attempt from 00:40:57.340 through 00:40:57.369; repeats elided.]
00:35:33.813 [2024-11-18 00:40:57.369871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.369923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.370118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.370171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.370366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.370419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.370629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.370681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.370918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.370970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 
00:35:33.813 [2024-11-18 00:40:57.371151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.371216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.371417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.371469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.371665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.371716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.371929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.371981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.372241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.372306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 
00:35:33.813 [2024-11-18 00:40:57.372555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.372607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.372802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.372854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.373057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.373108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.373337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.373390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.373609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.373661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 
00:35:33.813 [2024-11-18 00:40:57.373826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.373877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.374073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.374123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.374355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.374408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.374615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.374668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.374880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.374931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 
00:35:33.813 [2024-11-18 00:40:57.375117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.375168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.375342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.375397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.375593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.375644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.375822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.375874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 00:35:33.813 [2024-11-18 00:40:57.376063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.813 [2024-11-18 00:40:57.376115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.813 qpair failed and we were unable to recover it. 
00:35:33.813 [2024-11-18 00:40:57.376286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.376352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.376554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.376607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.376840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.376891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.377083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.377135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.377352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.377405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 
00:35:33.814 [2024-11-18 00:40:57.377568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.377620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.377857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.377908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.378109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.378161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.378399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.378452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.378654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.378706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 
00:35:33.814 [2024-11-18 00:40:57.378916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.378971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.379142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.379194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.379392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.379446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.379684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.379736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.379945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.379996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 
00:35:33.814 [2024-11-18 00:40:57.380204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.380255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.380463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.380517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.380714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.380766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.381007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.381059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.381289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.381353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 
00:35:33.814 [2024-11-18 00:40:57.381520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.381571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.381812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.381864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.382075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.382131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.382350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.382417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.382605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.382660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 
00:35:33.814 [2024-11-18 00:40:57.382869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.382925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.383196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.383262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.383504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.383560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.383755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.383811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.383978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.384035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 
00:35:33.814 [2024-11-18 00:40:57.384262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.384361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.384621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.384677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.384895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.384950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.385162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.385220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.385428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.385481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 
00:35:33.814 [2024-11-18 00:40:57.385686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.385738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.385929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.385981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.386196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.814 [2024-11-18 00:40:57.386248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.814 qpair failed and we were unable to recover it. 00:35:33.814 [2024-11-18 00:40:57.386443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.386496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.386678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.386730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 
00:35:33.815 [2024-11-18 00:40:57.386895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.386947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.387131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.387182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.387368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.387440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.387609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.387661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.387868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.387919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 
00:35:33.815 [2024-11-18 00:40:57.388115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.388167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.388359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.388412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.388654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.388706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.388951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.389003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.389192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.389243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 
00:35:33.815 [2024-11-18 00:40:57.389439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.389504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.389694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.389746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.389937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.389988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.390166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.390218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.390379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.390432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 
00:35:33.815 [2024-11-18 00:40:57.390622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.390674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.390839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.390891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.391138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.391189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.391421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.391475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.391637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.391689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 
00:35:33.815 [2024-11-18 00:40:57.391921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.391972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.392172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.392224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.392405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.392458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.392619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.392689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.392913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.392968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 
00:35:33.815 [2024-11-18 00:40:57.393230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.393295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.393540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.393595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.393780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.393836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.394012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.394068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.394256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.394345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 
00:35:33.815 [2024-11-18 00:40:57.394559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.394614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.394784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.394839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.395060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.395115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.395306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.395383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.395565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.395619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 
00:35:33.815 [2024-11-18 00:40:57.395832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.815 [2024-11-18 00:40:57.395887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.815 qpair failed and we were unable to recover it. 00:35:33.815 [2024-11-18 00:40:57.396079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.396135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.396381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.396446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.396614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.396670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.396860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.396916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 
00:35:33.816 [2024-11-18 00:40:57.397171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.397227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.397471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.397528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.397777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.397833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.398044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.398099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.398328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.398385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 
00:35:33.816 [2024-11-18 00:40:57.398599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.398655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.398874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.398928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.399147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.399202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.399458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.399515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.399731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.399787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 
00:35:33.816 [2024-11-18 00:40:57.399964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.400020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.400212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.400291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.400550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.400607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.400829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.400884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.401129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.401194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 
00:35:33.816 [2024-11-18 00:40:57.401477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.401534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.401755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.401811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.402026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.402082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.402296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.402370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.402574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.402629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 
00:35:33.816 [2024-11-18 00:40:57.402797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.402852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.403063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.403118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.403359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.403416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.403665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.403722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.403970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.404026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 
00:35:33.816 [2024-11-18 00:40:57.404238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.404294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.404540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.404596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.404846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.404902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.405079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.405135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.405344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.405401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 
00:35:33.816 [2024-11-18 00:40:57.405622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.405678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.405899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.405954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.406150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.406206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.406421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.406480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 00:35:33.816 [2024-11-18 00:40:57.406656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.816 [2024-11-18 00:40:57.406712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.816 qpair failed and we were unable to recover it. 
00:35:33.817 [2024-11-18 00:40:57.406896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.406952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.407207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.407262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.407488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.407544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.407750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.407806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.408018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.408073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 
00:35:33.817 [2024-11-18 00:40:57.408243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.408299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.408569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.408625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.408852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.408908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.409118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.409174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.409379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.409436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 
00:35:33.817 [2024-11-18 00:40:57.409613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.409668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.409825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.409880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.410132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.410188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.410401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.410457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.410709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.410765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 
00:35:33.817 [2024-11-18 00:40:57.410965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.411021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.411182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.411234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.411478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.411536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.411754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.411809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.412066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.412121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 
00:35:33.817 [2024-11-18 00:40:57.412341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.412398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.412579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.412635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.412853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.412908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.413089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.413145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.413345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.413401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 
00:35:33.817 [2024-11-18 00:40:57.413612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.413667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.413880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.413961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.414200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.414264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.414592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.414658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 00:35:33.817 [2024-11-18 00:40:57.414908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.414964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it. 
00:35:33.817 [2024-11-18 00:40:57.415144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-11-18 00:40:57.415209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.817 qpair failed and we were unable to recover it.
[... the three log lines above repeat ~115 times between 00:40:57.415 and 00:40:57.443: every attempt fails with connect() errno = 111 against addr=10.0.0.2, port=4420, the tqpair handle alternating between 0x18bcb40 and 0x7eff50000b90, each ending with "qpair failed and we were unable to recover it." ...]
00:35:33.820 [2024-11-18 00:40:57.443562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.820 [2024-11-18 00:40:57.443593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.820 qpair failed and we were unable to recover it. 00:35:33.820 [2024-11-18 00:40:57.443707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.820 [2024-11-18 00:40:57.443733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.820 qpair failed and we were unable to recover it. 00:35:33.820 [2024-11-18 00:40:57.443875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.820 [2024-11-18 00:40:57.443909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.820 qpair failed and we were unable to recover it. 00:35:33.820 [2024-11-18 00:40:57.444025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.444059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.444175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.444209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 
00:35:33.821 [2024-11-18 00:40:57.444365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.444391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.444501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.444527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.444614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.444641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.444752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.444778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.444867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.444893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 
00:35:33.821 [2024-11-18 00:40:57.445009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.445077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.445244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.445272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.445382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.445410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.445502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.445529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.445620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.445646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 
00:35:33.821 [2024-11-18 00:40:57.445731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.445783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.445948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.446007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.446167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.446221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.446413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.446440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.446558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.446584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 
00:35:33.821 [2024-11-18 00:40:57.446693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.446727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.446929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.446978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.447171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.447223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.447392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.447420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.447498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.447524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 
00:35:33.821 [2024-11-18 00:40:57.447653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.447687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.447813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.447858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.448005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.448037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.448169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.448197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.448275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.448299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 
00:35:33.821 [2024-11-18 00:40:57.448398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.448424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.448543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.448570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.448648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.448674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.448791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.448816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.448939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.448966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 
00:35:33.821 [2024-11-18 00:40:57.449076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.449102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.449182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.449208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.449334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.449364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.449444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.449470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.449589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.449615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 
00:35:33.821 [2024-11-18 00:40:57.449707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.821 [2024-11-18 00:40:57.449735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.821 qpair failed and we were unable to recover it. 00:35:33.821 [2024-11-18 00:40:57.449852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.449878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.449993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.450019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.450136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.450162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.450270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.450296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 
00:35:33.822 [2024-11-18 00:40:57.450420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.450447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.450529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.450556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.450635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.450666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.450743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.450770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.450881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.450935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 
00:35:33.822 [2024-11-18 00:40:57.451115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.451197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.451401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.451430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.451509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.451536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.451717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.451750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.451926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.451985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 
00:35:33.822 [2024-11-18 00:40:57.452128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.452171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.452255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.452281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.452438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.452466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.452607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.452632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.452734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.452779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 
00:35:33.822 [2024-11-18 00:40:57.452923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.452972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.453145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.453195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.453274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.453301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.453391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.453418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.453543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.453569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 
00:35:33.822 [2024-11-18 00:40:57.453784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.453817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.453989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.454022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.454122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.454156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.454743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.454780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.455000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.455032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 
00:35:33.822 [2024-11-18 00:40:57.455159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.455191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.455331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.455374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.455491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.455517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.455707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.455734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 00:35:33.822 [2024-11-18 00:40:57.455821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.822 [2024-11-18 00:40:57.455851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.822 qpair failed and we were unable to recover it. 
00:35:33.822 [2024-11-18 00:40:57.455968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.822 [2024-11-18 00:40:57.455995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.822 qpair failed and we were unable to recover it.
00:35:33.822 [2024-11-18 00:40:57.456147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.822 [2024-11-18 00:40:57.456186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.822 qpair failed and we were unable to recover it.
00:35:33.822 [2024-11-18 00:40:57.456351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.822 [2024-11-18 00:40:57.456380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.822 qpair failed and we were unable to recover it.
00:35:33.822 [2024-11-18 00:40:57.456500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.822 [2024-11-18 00:40:57.456527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.822 qpair failed and we were unable to recover it.
00:35:33.822 [2024-11-18 00:40:57.456632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.456659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.456752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.456779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.456918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.456953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.457131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.457163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.457290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.457361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.457484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.457510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.457620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.457646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.457786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.457812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.457989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.458047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.458193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.458240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.458354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.458381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.458527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.458553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.458725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.458750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.458864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.458890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.459012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.459043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.459245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.459275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.459401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.459427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.459569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.459596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.459813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.459844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.459967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.459998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.460128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.460160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.460264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.460295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.460441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.460468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.460584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.460611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.460745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.460777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.460916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.460942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.461082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.461113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.461224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.461250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.461351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.461377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.461493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.461519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.461623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.461649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.461767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.461793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.461958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.823 [2024-11-18 00:40:57.461989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.823 qpair failed and we were unable to recover it.
00:35:33.823 [2024-11-18 00:40:57.462123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.462158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.462261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.462293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.462445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.462477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.462588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.462646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.462853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.462879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.463009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.463050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.463211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.463243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.463384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.463410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.463491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.463517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.463635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.463689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.463812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.463883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.464096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.464138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.464287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.464326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.464433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.464459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.464569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.464596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.464711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.464736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.464846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.464889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.465048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.465090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.465261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.465308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.465458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.465486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.465628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.465654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.465762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.465788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.465898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.465924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.466065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.466106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.466253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.466284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.466433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.466472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.466579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.466609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.466784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.466816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.466976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.467018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.467169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.467208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.467367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.467396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.467489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.467518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.467671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.467723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.467869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.467920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.468053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.468086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.468197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.468229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.468390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.468430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.468552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.824 [2024-11-18 00:40:57.468580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.824 qpair failed and we were unable to recover it.
00:35:33.824 [2024-11-18 00:40:57.468743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.468786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.468950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.468991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.469155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.469207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.469405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.469432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.469546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.469573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.469694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.469721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.469834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.469860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.469941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.469983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.470112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.470144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.470305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.470363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.470452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.470478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.470621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.470648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.470750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.470782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.470879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.470912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.471070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.471101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.471228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.471259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.471369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.471395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.471502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.471529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.471620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.471651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.471792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.471818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.471955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.471998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.472137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.472168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.472294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.472335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.472467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.472496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.472634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.472666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.472825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.472856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.472985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.473018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.473159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.473192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.473366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.473393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.473505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.473531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.473646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.473686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.473876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.473916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.474114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.474154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.474322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.474371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.474511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.474538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.474651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.474677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.474858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.474898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.475086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.825 [2024-11-18 00:40:57.475125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:33.825 qpair failed and we were unable to recover it.
00:35:33.825 [2024-11-18 00:40:57.475286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.826 [2024-11-18 00:40:57.475328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.826 qpair failed and we were unable to recover it.
00:35:33.826 [2024-11-18 00:40:57.475468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:33.826 [2024-11-18 00:40:57.475505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:33.826 qpair failed and we were unable to recover it.
00:35:33.826 [2024-11-18 00:40:57.475650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.475699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.475888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.475939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.476143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.476175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.476334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.476366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.476576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.476607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 
00:35:33.826 [2024-11-18 00:40:57.476794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.476820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.476933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.476959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.477090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.477123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.477268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.477296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.477386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.477412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 
00:35:33.826 [2024-11-18 00:40:57.477525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.477552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.477670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.477710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.477864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.477903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.478120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.478147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.478264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.478290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 
00:35:33.826 [2024-11-18 00:40:57.478389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.478416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.478555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.478600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.478693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.478724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.478938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.478995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.479101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.479134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 
00:35:33.826 [2024-11-18 00:40:57.479263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.479294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.479445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.479471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.479587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.479613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.479729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.479756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.479900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.479931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 
00:35:33.826 [2024-11-18 00:40:57.480063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.480097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.480205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.480237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.480403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.480442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.480536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.480564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.480781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.480861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 
00:35:33.826 [2024-11-18 00:40:57.481029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.481056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.481136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.481163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.481292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.481326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.481434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.481459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.481536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.481562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 
00:35:33.826 [2024-11-18 00:40:57.481645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.826 [2024-11-18 00:40:57.481672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.826 qpair failed and we were unable to recover it. 00:35:33.826 [2024-11-18 00:40:57.481778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.481831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.482014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.482040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.482124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.482150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.482287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.482334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 
00:35:33.827 [2024-11-18 00:40:57.482471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.482498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.482639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.482679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.482843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.482883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.483098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.483152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.483290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.483329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 
00:35:33.827 [2024-11-18 00:40:57.483460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.483493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.483652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.483691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.483882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.483921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.484049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.484101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.484288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.484321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 
00:35:33.827 [2024-11-18 00:40:57.484440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.484466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.484562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.484610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.484771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.484811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.484959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.484999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.485159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.485199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 
00:35:33.827 [2024-11-18 00:40:57.485368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.485401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.485561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.485588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.485700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.485727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.485840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.485866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.485958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.486007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 
00:35:33.827 [2024-11-18 00:40:57.486213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.486270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.486422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.486462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.486561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.486589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.486770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.486810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.486935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.486991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 
00:35:33.827 [2024-11-18 00:40:57.487125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.487165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.487292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.487360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.487482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.487523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.487717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.487758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 00:35:33.827 [2024-11-18 00:40:57.487890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.827 [2024-11-18 00:40:57.487930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.827 qpair failed and we were unable to recover it. 
00:35:33.827 [2024-11-18 00:40:57.488092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.828 [2024-11-18 00:40:57.488132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.828 qpair failed and we were unable to recover it. 00:35:33.828 [2024-11-18 00:40:57.488279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.828 [2024-11-18 00:40:57.488321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.828 qpair failed and we were unable to recover it. 00:35:33.828 [2024-11-18 00:40:57.488469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.828 [2024-11-18 00:40:57.488502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.828 qpair failed and we were unable to recover it. 00:35:33.828 [2024-11-18 00:40:57.488669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.828 [2024-11-18 00:40:57.488710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.828 qpair failed and we were unable to recover it. 00:35:33.828 [2024-11-18 00:40:57.488871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.828 [2024-11-18 00:40:57.488912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.828 qpair failed and we were unable to recover it. 
00:35:33.828 [2024-11-18 00:40:57.489032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.828 [2024-11-18 00:40:57.489071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:33.828 qpair failed and we were unable to recover it. 
[... the same connect()/qpair-failure pair repeats continuously from 00:40:57.489226 through 00:40:57.511590, always ending in "qpair failed and we were unable to recover it."; the failing tqpair alternates among 0x7eff44000b90, 0x7eff50000b90, and 0x18bcb40, and every attempt targets addr=10.0.0.2, port=4420 with errno = 111 ...]
00:35:33.831 [2024-11-18 00:40:57.511800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.511850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.512057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.512108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.512244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.512275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.512473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.512506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.512715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.512755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 
00:35:33.831 [2024-11-18 00:40:57.512910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.512950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.513148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.513187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.513328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.513381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.513520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.513551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.513705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.513744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 
00:35:33.831 [2024-11-18 00:40:57.513912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.513952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.514117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.514156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.514332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.514364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.514527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.514559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.514697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.514737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 
00:35:33.831 [2024-11-18 00:40:57.514871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.514925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.515122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.515162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.515307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.515366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.515470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.515501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.515643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.515675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 
00:35:33.831 [2024-11-18 00:40:57.515834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.515872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.516065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.516104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.516252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.516291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.516419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.516450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.516616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.516655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 
00:35:33.831 [2024-11-18 00:40:57.516808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.516847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.517005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.517044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.517218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.517279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.517505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.517537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.517738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.517777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 
00:35:33.831 [2024-11-18 00:40:57.518005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.518044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.518172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.831 [2024-11-18 00:40:57.518244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.831 qpair failed and we were unable to recover it. 00:35:33.831 [2024-11-18 00:40:57.518414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.518446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.518580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.518645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.518851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.518933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 
00:35:33.832 [2024-11-18 00:40:57.519140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.519200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.519384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.519416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.519547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.519579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.519711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.519742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.519954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.520016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 
00:35:33.832 [2024-11-18 00:40:57.520250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.520325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.520437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.520469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.520626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.520658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.520827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.520918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.521145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.521205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 
00:35:33.832 [2024-11-18 00:40:57.521368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.521400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.521535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.521566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.521816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.521862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.522136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.522196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.522366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.522398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 
00:35:33.832 [2024-11-18 00:40:57.522503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.522534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.522719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.522800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.523052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.523112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.523264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.523303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.523464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.523496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 
00:35:33.832 [2024-11-18 00:40:57.523644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.523683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.523840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.523879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.524016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.524068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.524263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.524300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.524465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.524495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 
00:35:33.832 [2024-11-18 00:40:57.524597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.524647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.524839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.524870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.525169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.525229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.525452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.525484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.525638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.525703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 
00:35:33.832 [2024-11-18 00:40:57.525953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.526031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.832 qpair failed and we were unable to recover it. 00:35:33.832 [2024-11-18 00:40:57.526229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.832 [2024-11-18 00:40:57.526268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 00:35:33.833 [2024-11-18 00:40:57.526453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-18 00:40:57.526485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 00:35:33.833 [2024-11-18 00:40:57.526642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-18 00:40:57.526688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 00:35:33.833 [2024-11-18 00:40:57.526840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-18 00:40:57.526871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 
00:35:33.833 [2024-11-18 00:40:57.527000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-18 00:40:57.527037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 00:35:33.833 [2024-11-18 00:40:57.527252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-18 00:40:57.527291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 00:35:33.833 [2024-11-18 00:40:57.527532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-18 00:40:57.527593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 00:35:33.833 [2024-11-18 00:40:57.527842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-18 00:40:57.527921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 00:35:33.833 [2024-11-18 00:40:57.528071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-18 00:40:57.528125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 
00:35:33.833 [2024-11-18 00:40:57.528275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-18 00:40:57.528323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 00:35:33.833 [2024-11-18 00:40:57.528445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-18 00:40:57.528483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 00:35:33.833 [2024-11-18 00:40:57.528654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-18 00:40:57.528715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 00:35:33.833 [2024-11-18 00:40:57.528957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-18 00:40:57.529016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 00:35:33.833 [2024-11-18 00:40:57.529165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.833 [2024-11-18 00:40:57.529202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.833 qpair failed and we were unable to recover it. 
00:35:33.836 [2024-11-18 00:40:57.551208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.551246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-18 00:40:57.551416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.551456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-18 00:40:57.551616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.551662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-18 00:40:57.551790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.551832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-18 00:40:57.552020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.552058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 
00:35:33.836 [2024-11-18 00:40:57.552250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.552306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-18 00:40:57.552475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.552515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-18 00:40:57.552625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.552664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-18 00:40:57.552796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.552836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-18 00:40:57.552984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.553022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 
00:35:33.836 [2024-11-18 00:40:57.553183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.553221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-18 00:40:57.553380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.553421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-18 00:40:57.553579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.553618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-18 00:40:57.553777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.553816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-18 00:40:57.554005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.554045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 
00:35:33.836 [2024-11-18 00:40:57.554203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.554242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-18 00:40:57.554410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.554450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-18 00:40:57.554581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.554623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-18 00:40:57.554789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.554826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-18 00:40:57.554976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.555014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 
00:35:33.836 [2024-11-18 00:40:57.555171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.555208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-18 00:40:57.555371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.555409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.836 qpair failed and we were unable to recover it. 00:35:33.836 [2024-11-18 00:40:57.555592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.836 [2024-11-18 00:40:57.555629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.555809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.555845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.555991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.556028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 
00:35:33.837 [2024-11-18 00:40:57.556177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.556214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.556401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.556438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.556621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.556657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.556817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.556854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.557006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.557043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 
00:35:33.837 [2024-11-18 00:40:57.557178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.557216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.557372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.557410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.557569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.557605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.557763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.557800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.557920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.557957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 
00:35:33.837 [2024-11-18 00:40:57.558097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.558134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.558253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.558290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.558456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.558494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.558643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.558680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.558793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.558829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 
00:35:33.837 [2024-11-18 00:40:57.558989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.559026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.559207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.559244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.559398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.559435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.559592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.559635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.559789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.559826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 
00:35:33.837 [2024-11-18 00:40:57.559972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.560009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.560162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.560230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.560471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.560529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.560765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.560821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.561064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.561121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 
00:35:33.837 [2024-11-18 00:40:57.561291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.561342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.561473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.561510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.561697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.561735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.561860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.561899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.562084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.562120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 
00:35:33.837 [2024-11-18 00:40:57.562281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.562329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.562459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.562496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.562646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.562683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.562870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.562908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 00:35:33.837 [2024-11-18 00:40:57.563090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.837 [2024-11-18 00:40:57.563127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.837 qpair failed and we were unable to recover it. 
00:35:33.837 [2024-11-18 00:40:57.563322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.563361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-18 00:40:57.563514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.563553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-18 00:40:57.563675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.563712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-18 00:40:57.563818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.563855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-18 00:40:57.564041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.564079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 
00:35:33.838 [2024-11-18 00:40:57.564201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.564237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-18 00:40:57.564365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.564404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-18 00:40:57.564589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.564627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-18 00:40:57.564811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.564848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-18 00:40:57.565000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.565037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 
00:35:33.838 [2024-11-18 00:40:57.565194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.565239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-18 00:40:57.565386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.565424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-18 00:40:57.565574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.565612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-18 00:40:57.565800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.565838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-18 00:40:57.566019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.566056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 
00:35:33.838 [2024-11-18 00:40:57.566204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.566242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-18 00:40:57.566422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.566461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-18 00:40:57.566566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.566603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-18 00:40:57.566781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.566818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 00:35:33.838 [2024-11-18 00:40:57.566967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.838 [2024-11-18 00:40:57.567004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.838 qpair failed and we were unable to recover it. 
00:35:33.841 [2024-11-18 00:40:57.589099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-18 00:40:57.589139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-18 00:40:57.589257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-18 00:40:57.589295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-18 00:40:57.589461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-18 00:40:57.589506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-18 00:40:57.589665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-18 00:40:57.589704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-18 00:40:57.589891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-18 00:40:57.589930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 
00:35:33.841 [2024-11-18 00:40:57.590086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-18 00:40:57.590125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-18 00:40:57.590284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-18 00:40:57.590344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-18 00:40:57.590510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-18 00:40:57.590549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-18 00:40:57.590674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-18 00:40:57.590714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-18 00:40:57.590846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-18 00:40:57.590886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 
00:35:33.841 [2024-11-18 00:40:57.591050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-18 00:40:57.591090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-18 00:40:57.591330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-18 00:40:57.591370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-18 00:40:57.591562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-18 00:40:57.591601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-18 00:40:57.591799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-18 00:40:57.591837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-18 00:40:57.591966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-18 00:40:57.592006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 
00:35:33.841 [2024-11-18 00:40:57.592180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-18 00:40:57.592219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-18 00:40:57.592391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-18 00:40:57.592431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-18 00:40:57.592619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.841 [2024-11-18 00:40:57.592659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.841 qpair failed and we were unable to recover it. 00:35:33.841 [2024-11-18 00:40:57.592786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.592826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.592974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.593013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 
00:35:33.842 [2024-11-18 00:40:57.593198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.593237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.593415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.593455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.593643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.593682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.593841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.593879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.594001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.594041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 
00:35:33.842 [2024-11-18 00:40:57.594228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.594267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.594442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.594481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.594671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.594711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.594857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.594905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.595039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.595084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 
00:35:33.842 [2024-11-18 00:40:57.595248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.595324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.595553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.595610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.595783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.595823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.595960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.596000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.596166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.596206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 
00:35:33.842 [2024-11-18 00:40:57.596392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.596432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.596633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.596673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.596804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.596843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.597028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.597066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.597254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.597303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 
00:35:33.842 [2024-11-18 00:40:57.597483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.597525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.597735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.597789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.597963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.598005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.598147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.598187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.598345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.598385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 
00:35:33.842 [2024-11-18 00:40:57.598561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.598603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.598778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.598819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.599019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.599059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.599217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.599259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.599460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.599503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 
00:35:33.842 [2024-11-18 00:40:57.599670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.599718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.599933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.842 [2024-11-18 00:40:57.599992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:33.842 qpair failed and we were unable to recover it. 00:35:33.842 [2024-11-18 00:40:57.600185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.600226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.600422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.600464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.600612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.600655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 
00:35:34.122 [2024-11-18 00:40:57.600852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.600893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.601025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.601065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.601243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.601286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.601422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.601463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.601621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.601662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 
00:35:34.122 [2024-11-18 00:40:57.601800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.601841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.602011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.602052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.602219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.602260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.602456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.602498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.602634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.602675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 
00:35:34.122 [2024-11-18 00:40:57.602815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.602856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.603037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.603086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.603277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.603337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.603519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.603560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.603739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.603780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 
00:35:34.122 [2024-11-18 00:40:57.603942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.603990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.604127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.604169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.604344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.604385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.604582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.604623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.604787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.604828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 
00:35:34.122 [2024-11-18 00:40:57.605027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.605068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.605270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.605321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.605487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.605528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.605706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.605747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.605882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.605923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 
00:35:34.122 [2024-11-18 00:40:57.606043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.606084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.606207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.606248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.606441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.606483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.606647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.606688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 00:35:34.122 [2024-11-18 00:40:57.606857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.122 [2024-11-18 00:40:57.606898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.122 qpair failed and we were unable to recover it. 
00:35:34.122 [2024-11-18 00:40:57.607059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.607099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.607264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.607305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.607480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.607521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.607726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.607767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.607922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.607962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 
00:35:34.123 [2024-11-18 00:40:57.608119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.608160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.608280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.608334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.608502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.608545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.608720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.608762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.608954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.608996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 
00:35:34.123 [2024-11-18 00:40:57.609176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.609225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.609453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.609504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.609725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.609782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.609994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.610041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.610250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.610332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 
00:35:34.123 [2024-11-18 00:40:57.610540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.610582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.610736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.610777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.610952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.610993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.611110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.611151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.611356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.611398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 
00:35:34.123 [2024-11-18 00:40:57.611565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.611606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.611803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.611844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.612037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.612078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.612243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.612285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.612459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.612501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 
00:35:34.123 [2024-11-18 00:40:57.612668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.612709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.612905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.612947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.613111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.613153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.613320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.613362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.613530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.613572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 
00:35:34.123 [2024-11-18 00:40:57.613766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.613807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.614008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.614049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.614234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.614276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.614448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.614490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.614657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.614698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 
00:35:34.123 [2024-11-18 00:40:57.614822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.614863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.615032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.615072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.123 [2024-11-18 00:40:57.615204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.123 [2024-11-18 00:40:57.615246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.123 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.615455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.615498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.615664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.615711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 
00:35:34.124 [2024-11-18 00:40:57.615842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.615883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.616043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.616085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.616276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.616338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.616516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.616557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.616727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.616769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 
00:35:34.124 [2024-11-18 00:40:57.616966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.617007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.617122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.617163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.617341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.617384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.617580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.617621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.617771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.617812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 
00:35:34.124 [2024-11-18 00:40:57.617961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.618002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.618176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.618217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.618396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.618438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.618574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.618615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.618786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.618827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 
00:35:34.124 [2024-11-18 00:40:57.618987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.619027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.619195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.619238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.619394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.619436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.619598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.619639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.619795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.619837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 
00:35:34.124 [2024-11-18 00:40:57.620003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.620043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.620204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.620244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.620454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.620495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.620665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.620706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.620876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.620917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 
00:35:34.124 [2024-11-18 00:40:57.621083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.621124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.621281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.621334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.621545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.621586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.621710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.621751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.621906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.621947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 
00:35:34.124 [2024-11-18 00:40:57.622114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.622156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.622371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.622413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.622609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.622649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.622809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.622852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.124 [2024-11-18 00:40:57.623052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.623093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 
00:35:34.124 [2024-11-18 00:40:57.623264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.124 [2024-11-18 00:40:57.623305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.124 qpair failed and we were unable to recover it. 00:35:34.125 [2024-11-18 00:40:57.623484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.623525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 00:35:34.125 [2024-11-18 00:40:57.623729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.623770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 00:35:34.125 [2024-11-18 00:40:57.623933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.623973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 00:35:34.125 [2024-11-18 00:40:57.624142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.624184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 
00:35:34.125 [2024-11-18 00:40:57.624355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.624398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 00:35:34.125 [2024-11-18 00:40:57.624562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.624602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 00:35:34.125 [2024-11-18 00:40:57.624765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.624806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 00:35:34.125 [2024-11-18 00:40:57.624978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.625019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 00:35:34.125 [2024-11-18 00:40:57.625213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.625254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 
00:35:34.125 [2024-11-18 00:40:57.625400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.625441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 00:35:34.125 [2024-11-18 00:40:57.625609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.625651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 00:35:34.125 [2024-11-18 00:40:57.625811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.625852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 00:35:34.125 [2024-11-18 00:40:57.626006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.626047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 00:35:34.125 [2024-11-18 00:40:57.626204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.626246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 
00:35:34.125 [2024-11-18 00:40:57.626421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.626463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 00:35:34.125 [2024-11-18 00:40:57.626586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.626627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 00:35:34.125 [2024-11-18 00:40:57.626829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.626870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 00:35:34.125 [2024-11-18 00:40:57.627005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.627047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 00:35:34.125 [2024-11-18 00:40:57.627234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.125 [2024-11-18 00:40:57.627284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.125 qpair failed and we were unable to recover it. 
00:35:34.125 [2024-11-18 00:40:57.627525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.627567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.627736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.627777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.627930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.627983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.628123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.628186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.628348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.628391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.628552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.628593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.628754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.628795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.628963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.629005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.629203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.629245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.629396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.629459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.629622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.629663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.629828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.629870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.630044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.630092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.630332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.630394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.630539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.630580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.630775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.630818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.630977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.631020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.631177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.631221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.631429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.125 [2024-11-18 00:40:57.631474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.125 qpair failed and we were unable to recover it.
00:35:34.125 [2024-11-18 00:40:57.631676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.631719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.631893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.631936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.632100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.632143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.632326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.632371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.632508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.632551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.632677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.632721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.632897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.632941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.633139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.633188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.633426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.633470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.633637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.633681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.633857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.633900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.634112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.634161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.634346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.634390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.634564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.634608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.634771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.634815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.635015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.635059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.635235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.635278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.635472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.635516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.635686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.635729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.635887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.635930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.636123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.636175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.636380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.636425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.636556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.636599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.636781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.636825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.636948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.636992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.637130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.637173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.637356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.637400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.637558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.637602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.637807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.637850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.637996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.638040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.638252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.638296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.638444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.638488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.126 [2024-11-18 00:40:57.638636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.126 [2024-11-18 00:40:57.638680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.126 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.638843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.638887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.639064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.639107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.639283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.639342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.639553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.639602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.639740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.639783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.640003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.640047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.640225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.640269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.640440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.640484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.640659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.640703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.640907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.640950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.641091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.641140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.641340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.641386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.641532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.641575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.641714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.641757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.641964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.642015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.642225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.642268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.642425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.642469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.642681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.642726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.642932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.642976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.643150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.643193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.643404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.643449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.643658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.643701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.643877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.643920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.644102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.644146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.644308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.644361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.644568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.644611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.644818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.644862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.645029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.645072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.645266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.645330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.645504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.645548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.645760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.645802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.645930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.645973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.646144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.646188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.646399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.646443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.646622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.646665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.646847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.646891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.647091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.127 [2024-11-18 00:40:57.647134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.127 qpair failed and we were unable to recover it.
00:35:34.127 [2024-11-18 00:40:57.647292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.128 [2024-11-18 00:40:57.647354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.128 qpair failed and we were unable to recover it.
00:35:34.128 [2024-11-18 00:40:57.647538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.128 [2024-11-18 00:40:57.647582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.128 qpair failed and we were unable to recover it.
00:35:34.128 [2024-11-18 00:40:57.647722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.128 [2024-11-18 00:40:57.647765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.128 qpair failed and we were unable to recover it.
00:35:34.128 [2024-11-18 00:40:57.647932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.128 [2024-11-18 00:40:57.647976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.128 qpair failed and we were unable to recover it.
00:35:34.128 [2024-11-18 00:40:57.648153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.128 [2024-11-18 00:40:57.648197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.128 qpair failed and we were unable to recover it.
00:35:34.128 [2024-11-18 00:40:57.648376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.128 [2024-11-18 00:40:57.648421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.128 qpair failed and we were unable to recover it.
00:35:34.128 [2024-11-18 00:40:57.648644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.648688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.648899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.648943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.649146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.649189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.649395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.649440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.649652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.649706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 
00:35:34.128 [2024-11-18 00:40:57.649872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.649916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.650056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.650099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.650268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.650324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.650499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.650543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.650748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.650791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 
00:35:34.128 [2024-11-18 00:40:57.650967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.651011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.651172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.651216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.651390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.651441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.651572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.651616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.651790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.651833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 
00:35:34.128 [2024-11-18 00:40:57.652048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.652091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.652274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.652328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.652492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.652535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.652706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.652750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.652916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.652960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 
00:35:34.128 [2024-11-18 00:40:57.653135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.653183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.653408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.653452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.653663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.653706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.653876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.653920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.654133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.654176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 
00:35:34.128 [2024-11-18 00:40:57.654366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.654411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.654567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.654612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.654777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.654819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.654994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.655037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.655248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.655291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 
00:35:34.128 [2024-11-18 00:40:57.655476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.655520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.655667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.128 [2024-11-18 00:40:57.655711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.128 qpair failed and we were unable to recover it. 00:35:34.128 [2024-11-18 00:40:57.655885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.655928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.656065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.656107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.656237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.656281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 
00:35:34.129 [2024-11-18 00:40:57.656475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.656519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.656648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.656691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.656873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.656916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.657109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.657158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.657358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.657414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 
00:35:34.129 [2024-11-18 00:40:57.657601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.657655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.657834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.657878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.658038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.658080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.658232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.658276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.658471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.658515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 
00:35:34.129 [2024-11-18 00:40:57.658650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.658693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.658902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.658945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.659123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.659166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.659309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.659365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.659541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.659585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 
00:35:34.129 [2024-11-18 00:40:57.659748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.659791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.659958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.660003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.660182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.660225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.660435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.660479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.660646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.660689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 
00:35:34.129 [2024-11-18 00:40:57.660879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.660923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.661097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.661140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.661276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.661332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.661520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.661564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.661727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.661771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 
00:35:34.129 [2024-11-18 00:40:57.661911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.661954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.662122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.662165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.662336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.662380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.662513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.662556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.662689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.662734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 
00:35:34.129 [2024-11-18 00:40:57.662857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.662900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.663042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.663093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.663228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.663271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.663446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.663489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 00:35:34.129 [2024-11-18 00:40:57.663635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.663678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.129 qpair failed and we were unable to recover it. 
00:35:34.129 [2024-11-18 00:40:57.663878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.129 [2024-11-18 00:40:57.663921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.130 qpair failed and we were unable to recover it. 00:35:34.130 [2024-11-18 00:40:57.664121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.130 [2024-11-18 00:40:57.664163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.130 qpair failed and we were unable to recover it. 00:35:34.130 [2024-11-18 00:40:57.664354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.130 [2024-11-18 00:40:57.664398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.130 qpair failed and we were unable to recover it. 00:35:34.130 [2024-11-18 00:40:57.664568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.130 [2024-11-18 00:40:57.664611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.130 qpair failed and we were unable to recover it. 00:35:34.130 [2024-11-18 00:40:57.664814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.130 [2024-11-18 00:40:57.664857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.130 qpair failed and we were unable to recover it. 
00:35:34.130 [2024-11-18 00:40:57.665023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.130 [2024-11-18 00:40:57.665066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.130 qpair failed and we were unable to recover it. 00:35:34.130 [2024-11-18 00:40:57.665278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.130 [2024-11-18 00:40:57.665331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.130 qpair failed and we were unable to recover it. 00:35:34.130 [2024-11-18 00:40:57.665500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.130 [2024-11-18 00:40:57.665543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.130 qpair failed and we were unable to recover it. 00:35:34.130 [2024-11-18 00:40:57.665737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.130 [2024-11-18 00:40:57.665780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.130 qpair failed and we were unable to recover it. 00:35:34.130 [2024-11-18 00:40:57.665993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.130 [2024-11-18 00:40:57.666037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.130 qpair failed and we were unable to recover it. 
00:35:34.130 [2024-11-18 00:40:57.666231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.130 [2024-11-18 00:40:57.666274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.130 qpair failed and we were unable to recover it.
00:35:34.130 [2024-11-18 00:40:57.666536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.130 [2024-11-18 00:40:57.666601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.130 qpair failed and we were unable to recover it.
00:35:34.130 [2024-11-18 00:40:57.666805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.130 [2024-11-18 00:40:57.666855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.130 qpair failed and we were unable to recover it.
00:35:34.130 [2024-11-18 00:40:57.667034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.130 [2024-11-18 00:40:57.667081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.130 qpair failed and we were unable to recover it.
00:35:34.130 [2024-11-18 00:40:57.667262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.130 [2024-11-18 00:40:57.667325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.130 qpair failed and we were unable to recover it.
00:35:34.130 [2024-11-18 00:40:57.672298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.130 [2024-11-18 00:40:57.672357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.130 qpair failed and we were unable to recover it. 00:35:34.130 [2024-11-18 00:40:57.672529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.130 [2024-11-18 00:40:57.672575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.130 qpair failed and we were unable to recover it. 00:35:34.130 [2024-11-18 00:40:57.672760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.130 [2024-11-18 00:40:57.672806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.130 qpair failed and we were unable to recover it. 00:35:34.130 [2024-11-18 00:40:57.672991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.130 [2024-11-18 00:40:57.673037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.673276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.673334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 
00:35:34.131 [2024-11-18 00:40:57.673553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.673599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.673748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.673794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.674017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.674064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.674253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.674299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.674498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.674545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 
00:35:34.131 [2024-11-18 00:40:57.674736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.674782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.674994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.675039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.675254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.675300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.675509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.675555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.675741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.675787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 
00:35:34.131 [2024-11-18 00:40:57.676030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.676098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.676339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.676409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.676588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.676636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.676771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.676819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.677009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.677055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 
00:35:34.131 [2024-11-18 00:40:57.677211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.677257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.677432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.677481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.677654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.677700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.677913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.677959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.678172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.678218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 
00:35:34.131 [2024-11-18 00:40:57.678402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.678450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.678635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.678681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.678827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.678877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.679061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.679109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.679361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.679411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 
00:35:34.131 [2024-11-18 00:40:57.679592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.679639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.679855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.679902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.680045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.680091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.680239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.680285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.680519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.680565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 
00:35:34.131 [2024-11-18 00:40:57.680776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.680823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.681040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.131 [2024-11-18 00:40:57.681085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.131 qpair failed and we were unable to recover it. 00:35:34.131 [2024-11-18 00:40:57.681218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.681263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.681466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.681512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.681684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.681730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 
00:35:34.132 [2024-11-18 00:40:57.681903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.681948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.682162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.682208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.682434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.682482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.682624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.682670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.682888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.682934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 
00:35:34.132 [2024-11-18 00:40:57.683134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.683183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.683428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.683474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.683663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.683709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.683879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.683925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.684065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.684127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 
00:35:34.132 [2024-11-18 00:40:57.684338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.684385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.684563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.684610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.684746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.684793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.684986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.685032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.685215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.685261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 
00:35:34.132 [2024-11-18 00:40:57.685440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.685487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.685644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.685690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.685906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.685952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.686103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.686149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.686344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.686391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 
00:35:34.132 [2024-11-18 00:40:57.686569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.686615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.686801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.686847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.687063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.687109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.687297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.687357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.687551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.687597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 
00:35:34.132 [2024-11-18 00:40:57.687736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.687782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.687918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.687963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.688134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.688180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.688298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.688359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.688576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.688629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 
00:35:34.132 [2024-11-18 00:40:57.688804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.688851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.689046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.689092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.689268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.689328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.689518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.689565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 00:35:34.132 [2024-11-18 00:40:57.689706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.689754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.132 qpair failed and we were unable to recover it. 
00:35:34.132 [2024-11-18 00:40:57.689937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.132 [2024-11-18 00:40:57.689982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.133 qpair failed and we were unable to recover it. 00:35:34.133 [2024-11-18 00:40:57.690137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.133 [2024-11-18 00:40:57.690183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.133 qpair failed and we were unable to recover it. 00:35:34.133 [2024-11-18 00:40:57.690402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.133 [2024-11-18 00:40:57.690449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.133 qpair failed and we were unable to recover it. 00:35:34.133 [2024-11-18 00:40:57.690670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.133 [2024-11-18 00:40:57.690716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.133 qpair failed and we were unable to recover it. 00:35:34.133 [2024-11-18 00:40:57.690901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.133 [2024-11-18 00:40:57.690948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.133 qpair failed and we were unable to recover it. 
00:35:34.136 [2024-11-18 00:40:57.716911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.716956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.717176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.717229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.717505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.717555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.717792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.717867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.718116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.718165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 
00:35:34.136 [2024-11-18 00:40:57.718367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.718413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.718617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.718663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.718850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.718896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.719046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.719091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.719288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.719349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 
00:35:34.136 [2024-11-18 00:40:57.719572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.719621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.719783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.719833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.720027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.720076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.720252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.720300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.720479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.720527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 
00:35:34.136 [2024-11-18 00:40:57.720727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.720776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.721000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.721048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.721235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.721284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.721493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.721543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.721771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.721820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 
00:35:34.136 [2024-11-18 00:40:57.721978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.722027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.722207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.722256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.722495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.722544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.722706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.722755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.722990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.723038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 
00:35:34.136 [2024-11-18 00:40:57.723262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.723325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.723555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.723622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.723838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.723907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.724063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.724119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.724324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.724374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 
00:35:34.136 [2024-11-18 00:40:57.724605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.724654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.724858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.724906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.725055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.725103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.725297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.725360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.136 qpair failed and we were unable to recover it. 00:35:34.136 [2024-11-18 00:40:57.725547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.136 [2024-11-18 00:40:57.725597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 
00:35:34.137 [2024-11-18 00:40:57.725784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.725833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.726029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.726079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.726305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.726384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.726618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.726666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.726902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.726951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 
00:35:34.137 [2024-11-18 00:40:57.727140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.727189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.727355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.727405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.727659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.727709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.727951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.728000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.728179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.728227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 
00:35:34.137 [2024-11-18 00:40:57.728435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.728484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.728633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.728681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.728905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.728953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.729185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.729233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.729478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.729526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 
00:35:34.137 [2024-11-18 00:40:57.729701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.729750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.729943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.729992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.730212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.730260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.730462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.730510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.730726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.730775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 
00:35:34.137 [2024-11-18 00:40:57.730953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.731001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.731198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.731246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.731430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.731481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.731674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.731722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.731910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.731958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 
00:35:34.137 [2024-11-18 00:40:57.732188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.732237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.732445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.732494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.732678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.732726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.732907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.732957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.733100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.733147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 
00:35:34.137 [2024-11-18 00:40:57.733350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.733400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.733601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.733649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.733837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.733886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.734031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.734079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.734270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.734340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 
00:35:34.137 [2024-11-18 00:40:57.734498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.734547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.137 [2024-11-18 00:40:57.734782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.137 [2024-11-18 00:40:57.734831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.137 qpair failed and we were unable to recover it. 00:35:34.138 [2024-11-18 00:40:57.734984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.138 [2024-11-18 00:40:57.735033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.138 qpair failed and we were unable to recover it. 00:35:34.138 [2024-11-18 00:40:57.735240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.138 [2024-11-18 00:40:57.735289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.138 qpair failed and we were unable to recover it. 00:35:34.138 [2024-11-18 00:40:57.735496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.138 [2024-11-18 00:40:57.735545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.138 qpair failed and we were unable to recover it. 
00:35:34.138 [2024-11-18 00:40:57.735778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.138 [2024-11-18 00:40:57.735827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.138 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 00:40:57.735 through 00:40:57.764 ...]
00:35:34.141 [2024-11-18 00:40:57.764387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.764456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.764666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.764744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.764958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.765007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.765196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.765245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.765410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.765477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 
00:35:34.141 [2024-11-18 00:40:57.765718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.765767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.765992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.766040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.766240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.766287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.766499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.766548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.766749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.766817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 
00:35:34.141 [2024-11-18 00:40:57.767046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.767094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.767240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.767289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.767489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.767538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.767697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.767745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.767891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.767941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 
00:35:34.141 [2024-11-18 00:40:57.768175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.768224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.768422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.768472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.768670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.768719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.768902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.768951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.769142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.769191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 
00:35:34.141 [2024-11-18 00:40:57.769417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.769486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.769703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.769776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.770013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.770062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.770260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.770325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.770586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.770661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 
00:35:34.141 [2024-11-18 00:40:57.770882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.770950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.771194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.771242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.771519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.771588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.771854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.771931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 00:35:34.141 [2024-11-18 00:40:57.772129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.772178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.141 qpair failed and we were unable to recover it. 
00:35:34.141 [2024-11-18 00:40:57.772403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.141 [2024-11-18 00:40:57.772474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.772714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.772780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.772963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.773036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.773239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.773288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.773590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.773659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 
00:35:34.142 [2024-11-18 00:40:57.773946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.774014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.774239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.774288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.774529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.774579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.774819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.774868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.775115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.775182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 
00:35:34.142 [2024-11-18 00:40:57.775377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.775427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.775669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.775719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.775935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.776000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.776193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.776242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.776496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.776574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 
00:35:34.142 [2024-11-18 00:40:57.776758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.776832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.776974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.777021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.777263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.777325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.777572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.777639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.777879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.777946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 
00:35:34.142 [2024-11-18 00:40:57.778175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.778224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.778467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.778536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.778744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.778810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.778983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.779050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.779285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.779349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 
00:35:34.142 [2024-11-18 00:40:57.779562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.779636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.779884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.779949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.780140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.780189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.780416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.780483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.780679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.780728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 
00:35:34.142 [2024-11-18 00:40:57.780922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.780970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.781168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.781216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.781411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.781461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.781640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.781689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.781850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.781899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 
00:35:34.142 [2024-11-18 00:40:57.782094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.782143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.782351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.782401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.782631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.782679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.142 [2024-11-18 00:40:57.782911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.142 [2024-11-18 00:40:57.782960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.142 qpair failed and we were unable to recover it. 00:35:34.143 [2024-11-18 00:40:57.783133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.143 [2024-11-18 00:40:57.783182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.143 qpair failed and we were unable to recover it. 
00:35:34.143 [2024-11-18 00:40:57.783381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.143 [2024-11-18 00:40:57.783440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.143 qpair failed and we were unable to recover it. 00:35:34.143 [2024-11-18 00:40:57.783626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.143 [2024-11-18 00:40:57.783675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.143 qpair failed and we were unable to recover it. 00:35:34.143 [2024-11-18 00:40:57.783873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.143 [2024-11-18 00:40:57.783922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.143 qpair failed and we were unable to recover it. 00:35:34.143 [2024-11-18 00:40:57.784080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.143 [2024-11-18 00:40:57.784128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.143 qpair failed and we were unable to recover it. 00:35:34.143 [2024-11-18 00:40:57.784348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.143 [2024-11-18 00:40:57.784399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.143 qpair failed and we were unable to recover it. 
00:35:34.143 [2024-11-18 00:40:57.784585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.143 [2024-11-18 00:40:57.784639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.143 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 repeats through 2024-11-18 00:40:57.791643 ...]
00:35:34.143 [2024-11-18 00:40:57.791713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ca970 (9): Bad file descriptor
00:35:34.143 [2024-11-18 00:40:57.792107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.143 [2024-11-18 00:40:57.792179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.143 qpair failed and we were unable to recover it.
[... the same triplet repeats twice more for tqpair=0x7eff48000b90 (00:40:57.792418, 00:40:57.792705), then twice for tqpair=0x18bcb40 (00:40:57.793026, 00:40:57.793292) ...]
00:35:34.144 [2024-11-18 00:40:57.793556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.144 [2024-11-18 00:40:57.793605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.144 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 repeats through 2024-11-18 00:40:57.813276 ...]
00:35:34.146 [2024-11-18 00:40:57.813491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.813557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.813784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.813850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.814040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.814089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.814274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.814338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.814576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.814644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 
00:35:34.146 [2024-11-18 00:40:57.814793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.814842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.815028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.815077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.815266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.815328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.815593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.815668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.815886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.815953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 
00:35:34.146 [2024-11-18 00:40:57.816107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.816156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.816364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.816414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.816615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.816682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.816930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.816999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.817216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.817267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 
00:35:34.146 [2024-11-18 00:40:57.817532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.817598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.817806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.817872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.818031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.818085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.818331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.818380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.818583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.818648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 
00:35:34.146 [2024-11-18 00:40:57.818845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.818912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.819137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.819185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.819448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.819514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.819762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.819829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.820096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.820165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 
00:35:34.146 [2024-11-18 00:40:57.820377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.820446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.820666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.820732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.146 qpair failed and we were unable to recover it. 00:35:34.146 [2024-11-18 00:40:57.820947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.146 [2024-11-18 00:40:57.821012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.821215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.821266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.821496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.821564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 
00:35:34.147 [2024-11-18 00:40:57.821727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.821795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.822003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.822068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.822267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.822328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.822576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.822642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.822840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.822889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 
00:35:34.147 [2024-11-18 00:40:57.823094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.823143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.823368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.823418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.823598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.823664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.823843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.823894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.824088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.824136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 
00:35:34.147 [2024-11-18 00:40:57.824355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.824404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.824615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.824672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.824898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.824947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.825171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.825219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.825451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.825501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 
00:35:34.147 [2024-11-18 00:40:57.825659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.825707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.825857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.825907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.826107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.826156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.826304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.826365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.826563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.826636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 
00:35:34.147 [2024-11-18 00:40:57.826898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.826964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.827114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.827163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.827364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.827414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.827648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.827697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.827889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.827938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 
00:35:34.147 [2024-11-18 00:40:57.828109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.828158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.828325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.828376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.828560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.828608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.828799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.828847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.828999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.829049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 
00:35:34.147 [2024-11-18 00:40:57.829267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.829338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.829607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.829673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.829832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.829900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.830075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.830124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.147 [2024-11-18 00:40:57.830331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.830381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 
00:35:34.147 [2024-11-18 00:40:57.830566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.147 [2024-11-18 00:40:57.830642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.147 qpair failed and we were unable to recover it. 00:35:34.148 [2024-11-18 00:40:57.830846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.830913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 00:35:34.148 [2024-11-18 00:40:57.831109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.831162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 00:35:34.148 [2024-11-18 00:40:57.831378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.831455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 00:35:34.148 [2024-11-18 00:40:57.831685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.831734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 
00:35:34.148 [2024-11-18 00:40:57.831966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.832015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 00:35:34.148 [2024-11-18 00:40:57.832164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.832213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 00:35:34.148 [2024-11-18 00:40:57.832412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.832478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 00:35:34.148 [2024-11-18 00:40:57.832686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.832751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 00:35:34.148 [2024-11-18 00:40:57.832958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.833007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 
00:35:34.148 [2024-11-18 00:40:57.833241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.833289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 00:35:34.148 [2024-11-18 00:40:57.833483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.833552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 00:35:34.148 [2024-11-18 00:40:57.833707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.833756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 00:35:34.148 [2024-11-18 00:40:57.833928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.833977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 00:35:34.148 [2024-11-18 00:40:57.834197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.834246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 
00:35:34.148 [2024-11-18 00:40:57.834463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.834513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 00:35:34.148 [2024-11-18 00:40:57.834675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.834723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 00:35:34.148 [2024-11-18 00:40:57.834958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.835007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 00:35:34.148 [2024-11-18 00:40:57.835159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.835208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 00:35:34.148 [2024-11-18 00:40:57.835365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.148 [2024-11-18 00:40:57.835415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.148 qpair failed and we were unable to recover it. 
00:35:34.148 [2024-11-18 00:40:57.835595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.148 [2024-11-18 00:40:57.835653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.148 qpair failed and we were unable to recover it.
00:35:34.148 [2024-11-18 00:40:57.835870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.148 [2024-11-18 00:40:57.835919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.148 qpair failed and we were unable to recover it.
00:35:34.148 [2024-11-18 00:40:57.836145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.148 [2024-11-18 00:40:57.836195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.148 qpair failed and we were unable to recover it.
00:35:34.148 [2024-11-18 00:40:57.836460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.148 [2024-11-18 00:40:57.836529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.148 qpair failed and we were unable to recover it.
00:35:34.148 [2024-11-18 00:40:57.836795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.148 [2024-11-18 00:40:57.836864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.148 qpair failed and we were unable to recover it.
00:35:34.148 [2024-11-18 00:40:57.837021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.148 [2024-11-18 00:40:57.837072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.148 qpair failed and we were unable to recover it.
00:35:34.148 [2024-11-18 00:40:57.837252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.148 [2024-11-18 00:40:57.837301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.148 qpair failed and we were unable to recover it.
00:35:34.148 [2024-11-18 00:40:57.837590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.148 [2024-11-18 00:40:57.837658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.148 qpair failed and we were unable to recover it.
00:35:34.148 [2024-11-18 00:40:57.837848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.148 [2024-11-18 00:40:57.837922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.148 qpair failed and we were unable to recover it.
00:35:34.148 [2024-11-18 00:40:57.838058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.148 [2024-11-18 00:40:57.838107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.148 qpair failed and we were unable to recover it.
00:35:34.148 [2024-11-18 00:40:57.838308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.148 [2024-11-18 00:40:57.838383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.148 qpair failed and we were unable to recover it.
00:35:34.148 [2024-11-18 00:40:57.838633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.148 [2024-11-18 00:40:57.838681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.148 qpair failed and we were unable to recover it.
00:35:34.148 [2024-11-18 00:40:57.838867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.148 [2024-11-18 00:40:57.838915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.148 qpair failed and we were unable to recover it.
00:35:34.148 [2024-11-18 00:40:57.839152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.148 [2024-11-18 00:40:57.839202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.148 qpair failed and we were unable to recover it.
00:35:34.148 [2024-11-18 00:40:57.839347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.148 [2024-11-18 00:40:57.839397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.148 qpair failed and we were unable to recover it.
00:35:34.148 [2024-11-18 00:40:57.839549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.148 [2024-11-18 00:40:57.839597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.148 qpair failed and we were unable to recover it.
00:35:34.148 [2024-11-18 00:40:57.839747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.148 [2024-11-18 00:40:57.839797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.148 qpair failed and we were unable to recover it.
00:35:34.148 [2024-11-18 00:40:57.839956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.148 [2024-11-18 00:40:57.840004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.148 qpair failed and we were unable to recover it.
00:35:34.148 [2024-11-18 00:40:57.840149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.840202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.840386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.840437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.840597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.840645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.840790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.840848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.841032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.841082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.841266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.841327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.841558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.841633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.841866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.841920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.842162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.842214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.842381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.842434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.842672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.842722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.842913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.842962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.843155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.843207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.843428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.843479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.843664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.843713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.843868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.843919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.844120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.844170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.844404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.844454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.844600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.844649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.844885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.844947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.845169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.845219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.845450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.845500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.845703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.845752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.845944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.845994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.846156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.846206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.846406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.846456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.846645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.846694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.846882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.846932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.847134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.847184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.847376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.847426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.847602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.847650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.847883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.847932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.848122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.149 [2024-11-18 00:40:57.848175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.149 qpair failed and we were unable to recover it.
00:35:34.149 [2024-11-18 00:40:57.848363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.848413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.848625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.848676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.848897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.848946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.849135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.849183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.849348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.849399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.849588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.849637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.849863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.849912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.850079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.850128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.850332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.850381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.850570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.850620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.850854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.850904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.851090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.851162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.851406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.851456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.851695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.851745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.851925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.851974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.852160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.852209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.852450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.852502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.852692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.852740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.852963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.853012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.853194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.853245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.853444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.853493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.853684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.853735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.853909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.853960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.854142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.854190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.854388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.854438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.854651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.854702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.854928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.854987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.855280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.855377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.855601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.855652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.855876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.855925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.856160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.856208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.856442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.856493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.856725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.856774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.856953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.857001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.857194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.857243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.857444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.857495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.857654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.857702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.857932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.150 [2024-11-18 00:40:57.857981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.150 qpair failed and we were unable to recover it.
00:35:34.150 [2024-11-18 00:40:57.858173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.151 [2024-11-18 00:40:57.858223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.151 qpair failed and we were unable to recover it.
00:35:34.151 [2024-11-18 00:40:57.858462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.151 [2024-11-18 00:40:57.858511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.151 qpair failed and we were unable to recover it.
00:35:34.151 [2024-11-18 00:40:57.858673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.151 [2024-11-18 00:40:57.858723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.151 qpair failed and we were unable to recover it.
00:35:34.151 [2024-11-18 00:40:57.858954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.151 [2024-11-18 00:40:57.859002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.151 qpair failed and we were unable to recover it.
00:35:34.151 [2024-11-18 00:40:57.859171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.151 [2024-11-18 00:40:57.859250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.151 qpair failed and we were unable to recover it.
00:35:34.151 [2024-11-18 00:40:57.859500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.151 [2024-11-18 00:40:57.859550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.151 qpair failed and we were unable to recover it.
00:35:34.151 [2024-11-18 00:40:57.859802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.859853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.860084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.860136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.860360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.860411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.860642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.860691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.860914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.860963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 
00:35:34.151 [2024-11-18 00:40:57.861153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.861203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.861407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.861458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.861649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.861700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.861873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.861926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.862166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.862245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 
00:35:34.151 [2024-11-18 00:40:57.862531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.862585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.862856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.862909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.863085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.863137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.863376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.863432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.863640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.863693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 
00:35:34.151 [2024-11-18 00:40:57.863906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.863955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.864155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.864204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.864411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.864462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.864651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.864701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.864933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.864981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 
00:35:34.151 [2024-11-18 00:40:57.865185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.865236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.865479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.865553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.865758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.865822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.866094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.866148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.866304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.866371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 
00:35:34.151 [2024-11-18 00:40:57.866573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.866626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.866879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.866931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.867095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.867144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.867340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.867391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.867585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.867634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 
00:35:34.151 [2024-11-18 00:40:57.867853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.867903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.151 [2024-11-18 00:40:57.868060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.151 [2024-11-18 00:40:57.868137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.151 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.868359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.868410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.868604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.868656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.868844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.868892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 
00:35:34.152 [2024-11-18 00:40:57.869111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.869161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.869345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.869395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.869627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.869676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.869938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.869987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.870124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.870173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 
00:35:34.152 [2024-11-18 00:40:57.870332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.870381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.870610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.870659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.870900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.870949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.871093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.871142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.871403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.871454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 
00:35:34.152 [2024-11-18 00:40:57.871644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.871693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.871877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.871925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.872153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.872202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.872346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.872396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.872601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.872651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 
00:35:34.152 [2024-11-18 00:40:57.872839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.872888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.873078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.873129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.873336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.873386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.873582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.873632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.873822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.873870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 
00:35:34.152 [2024-11-18 00:40:57.874033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.874082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.874281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.874346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.874579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.874627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.874818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.874866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.875047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.875097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 
00:35:34.152 [2024-11-18 00:40:57.875298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.875359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.875551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.875601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.875805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.875855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.876016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.876067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.876221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.876272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 
00:35:34.152 [2024-11-18 00:40:57.876485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.876535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.876729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.876778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.876969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.877017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.877252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.877302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.152 qpair failed and we were unable to recover it. 00:35:34.152 [2024-11-18 00:40:57.877495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.152 [2024-11-18 00:40:57.877543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.153 qpair failed and we were unable to recover it. 
00:35:34.153 [2024-11-18 00:40:57.877770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.153 [2024-11-18 00:40:57.877819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.153 qpair failed and we were unable to recover it. 00:35:34.153 [2024-11-18 00:40:57.878058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.153 [2024-11-18 00:40:57.878107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.153 qpair failed and we were unable to recover it. 00:35:34.153 [2024-11-18 00:40:57.878298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.153 [2024-11-18 00:40:57.878360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.153 qpair failed and we were unable to recover it. 00:35:34.153 [2024-11-18 00:40:57.878592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.153 [2024-11-18 00:40:57.878640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.153 qpair failed and we were unable to recover it. 00:35:34.153 [2024-11-18 00:40:57.878881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.153 [2024-11-18 00:40:57.878931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.153 qpair failed and we were unable to recover it. 
00:35:34.153 [2024-11-18 00:40:57.879158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.153 [2024-11-18 00:40:57.879206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.153 qpair failed and we were unable to recover it. 00:35:34.153 [2024-11-18 00:40:57.879359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.153 [2024-11-18 00:40:57.879409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.153 qpair failed and we were unable to recover it. 00:35:34.153 [2024-11-18 00:40:57.879609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.153 [2024-11-18 00:40:57.879658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.153 qpair failed and we were unable to recover it. 00:35:34.153 [2024-11-18 00:40:57.879843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.153 [2024-11-18 00:40:57.879891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.153 qpair failed and we were unable to recover it. 00:35:34.153 [2024-11-18 00:40:57.880091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.153 [2024-11-18 00:40:57.880143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.153 qpair failed and we were unable to recover it. 
00:35:34.153 [2024-11-18 00:40:57.880325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.153 [2024-11-18 00:40:57.880379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.153 qpair failed and we were unable to recover it. 00:35:34.153 [2024-11-18 00:40:57.880624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.153 [2024-11-18 00:40:57.880676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.153 qpair failed and we were unable to recover it. 00:35:34.153 [2024-11-18 00:40:57.880911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.153 [2024-11-18 00:40:57.880963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.153 qpair failed and we were unable to recover it. 00:35:34.153 [2024-11-18 00:40:57.881168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.153 [2024-11-18 00:40:57.881221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.153 qpair failed and we were unable to recover it. 00:35:34.153 [2024-11-18 00:40:57.881385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.153 [2024-11-18 00:40:57.881439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.153 qpair failed and we were unable to recover it. 
00:35:34.153 [2024-11-18 00:40:57.881674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.153 [2024-11-18 00:40:57.881727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.153 qpair failed and we were unable to recover it.
[... the three lines above repeat 77 more times for tqpair=0x7eff50000b90 (timestamps 00:40:57.881882 through 00:40:57.902272) ...]
00:35:34.155 [2024-11-18 00:40:57.902478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.155 [2024-11-18 00:40:57.902557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.155 qpair failed and we were unable to recover it.
[... the three lines above repeat 36 more times for tqpair=0x18bcb40 (timestamps 00:40:57.902786 through 00:40:57.912369) ...]
00:35:34.156 [2024-11-18 00:40:57.912535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.156 [2024-11-18 00:40:57.912588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.156 qpair failed and we were unable to recover it. 00:35:34.156 [2024-11-18 00:40:57.912791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.156 [2024-11-18 00:40:57.912845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.156 qpair failed and we were unable to recover it. 00:35:34.156 [2024-11-18 00:40:57.913092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.156 [2024-11-18 00:40:57.913145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.156 qpair failed and we were unable to recover it. 00:35:34.156 [2024-11-18 00:40:57.913398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.156 [2024-11-18 00:40:57.913450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.156 qpair failed and we were unable to recover it. 00:35:34.156 [2024-11-18 00:40:57.913653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.156 [2024-11-18 00:40:57.913705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.156 qpair failed and we were unable to recover it. 
00:35:34.156 [2024-11-18 00:40:57.913878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.156 [2024-11-18 00:40:57.913932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.156 qpair failed and we were unable to recover it. 00:35:34.156 [2024-11-18 00:40:57.914126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.156 [2024-11-18 00:40:57.914179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.156 qpair failed and we were unable to recover it. 00:35:34.156 [2024-11-18 00:40:57.914376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.156 [2024-11-18 00:40:57.914429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.156 qpair failed and we were unable to recover it. 00:35:34.156 [2024-11-18 00:40:57.914566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.156 [2024-11-18 00:40:57.914618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.156 qpair failed and we were unable to recover it. 00:35:34.156 [2024-11-18 00:40:57.914815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.156 [2024-11-18 00:40:57.914867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.156 qpair failed and we were unable to recover it. 
00:35:34.156 [2024-11-18 00:40:57.915071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.156 [2024-11-18 00:40:57.915125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.156 qpair failed and we were unable to recover it. 00:35:34.156 [2024-11-18 00:40:57.915366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.156 [2024-11-18 00:40:57.915418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.156 qpair failed and we were unable to recover it. 00:35:34.156 [2024-11-18 00:40:57.915633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.156 [2024-11-18 00:40:57.915686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.156 qpair failed and we were unable to recover it. 00:35:34.156 [2024-11-18 00:40:57.915895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.156 [2024-11-18 00:40:57.915947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.156 qpair failed and we were unable to recover it. 00:35:34.156 [2024-11-18 00:40:57.916185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.156 [2024-11-18 00:40:57.916237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.156 qpair failed and we were unable to recover it. 
00:35:34.156 [2024-11-18 00:40:57.916428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.156 [2024-11-18 00:40:57.916481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.156 qpair failed and we were unable to recover it. 00:35:34.156 [2024-11-18 00:40:57.916719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.156 [2024-11-18 00:40:57.916772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.156 qpair failed and we were unable to recover it. 00:35:34.156 [2024-11-18 00:40:57.916968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.917020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.917231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.917284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.917507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.917559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 
00:35:34.157 [2024-11-18 00:40:57.917756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.917807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.918009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.918061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.918296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.918363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.918572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.918624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.918844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.918897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 
00:35:34.157 [2024-11-18 00:40:57.919135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.919188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.919355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.919409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.919612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.919663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.919866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.919917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.920160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.920211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 
00:35:34.157 [2024-11-18 00:40:57.920389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.920443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.920660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.920712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.920872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.920925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.921139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.921190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.921397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.921450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 
00:35:34.157 [2024-11-18 00:40:57.921655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.921707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.921925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.921978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.922197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.922262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.922539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.922592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.922830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.922882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 
00:35:34.157 [2024-11-18 00:40:57.923153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.923218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.923493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.923547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.923802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.923855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.924064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.924128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.924340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.924394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 
00:35:34.157 [2024-11-18 00:40:57.924615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.924668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.924847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.924898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.157 qpair failed and we were unable to recover it. 00:35:34.157 [2024-11-18 00:40:57.925144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.157 [2024-11-18 00:40:57.925195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.441 qpair failed and we were unable to recover it. 00:35:34.441 [2024-11-18 00:40:57.925445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.441 [2024-11-18 00:40:57.925499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.441 qpair failed and we were unable to recover it. 00:35:34.441 [2024-11-18 00:40:57.925705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.441 [2024-11-18 00:40:57.925759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.441 qpair failed and we were unable to recover it. 
00:35:34.441 [2024-11-18 00:40:57.925934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.441 [2024-11-18 00:40:57.925986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.441 qpair failed and we were unable to recover it. 00:35:34.441 [2024-11-18 00:40:57.926174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.926226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.926420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.926474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.926645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.926698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.926903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.926956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 
00:35:34.442 [2024-11-18 00:40:57.927125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.927177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.927381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.927434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.927626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.927678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.927913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.927965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.928134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.928186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 
00:35:34.442 [2024-11-18 00:40:57.928430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.928483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.928681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.928733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.928935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.928986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.929192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.929244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.929468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.929535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 
00:35:34.442 [2024-11-18 00:40:57.929705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.929756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.929911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.929963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.930126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.930178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.930375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.930428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.930672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.930724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 
00:35:34.442 [2024-11-18 00:40:57.930937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.930988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.931149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.931231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.931465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.931519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.931723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.931775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 00:35:34.442 [2024-11-18 00:40:57.931934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.931988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it. 
00:35:34.442 [2024-11-18 00:40:57.932143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.442 [2024-11-18 00:40:57.932199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.442 qpair failed and we were unable to recover it.
[... the same three-record error sequence (posix_sock_create connect() failed errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats over 100 times, timestamps 2024-11-18 00:40:57.932445 through 00:40:57.958481; repeats elided ...]
00:35:34.445 [2024-11-18 00:40:57.958591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.445 [2024-11-18 00:40:57.958636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.445 qpair failed and we were unable to recover it. 00:35:34.445 [2024-11-18 00:40:57.958861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.445 [2024-11-18 00:40:57.958895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.445 qpair failed and we were unable to recover it. 00:35:34.445 [2024-11-18 00:40:57.959017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.445 [2024-11-18 00:40:57.959053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.445 qpair failed and we were unable to recover it. 00:35:34.445 [2024-11-18 00:40:57.959329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.445 [2024-11-18 00:40:57.959366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.445 qpair failed and we were unable to recover it. 00:35:34.445 [2024-11-18 00:40:57.959506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.445 [2024-11-18 00:40:57.959539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.445 qpair failed and we were unable to recover it. 
00:35:34.445 [2024-11-18 00:40:57.959665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.445 [2024-11-18 00:40:57.959699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.445 qpair failed and we were unable to recover it. 00:35:34.445 [2024-11-18 00:40:57.959845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.445 [2024-11-18 00:40:57.959879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.445 qpair failed and we were unable to recover it. 00:35:34.445 [2024-11-18 00:40:57.960048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.445 [2024-11-18 00:40:57.960100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.445 qpair failed and we were unable to recover it. 00:35:34.445 [2024-11-18 00:40:57.960262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.445 [2024-11-18 00:40:57.960346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.445 qpair failed and we were unable to recover it. 00:35:34.445 [2024-11-18 00:40:57.960506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.445 [2024-11-18 00:40:57.960540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.445 qpair failed and we were unable to recover it. 
00:35:34.445 [2024-11-18 00:40:57.960680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.445 [2024-11-18 00:40:57.960715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.445 qpair failed and we were unable to recover it. 00:35:34.445 [2024-11-18 00:40:57.960891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.445 [2024-11-18 00:40:57.960942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.445 qpair failed and we were unable to recover it. 00:35:34.445 [2024-11-18 00:40:57.961091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.961144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.961357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.961393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.961510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.961544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 
00:35:34.446 [2024-11-18 00:40:57.961739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.961809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.962082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.962143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.962409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.962443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.962616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.962650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.962779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.962839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 
00:35:34.446 [2024-11-18 00:40:57.963167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.963228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.963435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.963469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.963590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.963624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.963738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.963778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.963915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.963949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 
00:35:34.446 [2024-11-18 00:40:57.964086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.964119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.964227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.964261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.964382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.964416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.964549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.964584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.964721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.964755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 
00:35:34.446 [2024-11-18 00:40:57.964891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.964925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.965092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.965133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.965302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.965349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.965489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.965523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.965670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.965704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 
00:35:34.446 [2024-11-18 00:40:57.965835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.965869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.966004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.966039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.966189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.966223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.966365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.966400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.966538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.966572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 
00:35:34.446 [2024-11-18 00:40:57.966718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.966753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.966920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.966954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.967086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.967166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.446 [2024-11-18 00:40:57.967436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.446 [2024-11-18 00:40:57.967470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.446 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.967602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.967636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 
00:35:34.447 [2024-11-18 00:40:57.967799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.967863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.968107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.968172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.968414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.968449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.968557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.968591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.968866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.968922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 
00:35:34.447 [2024-11-18 00:40:57.969156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.969217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.969452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.969486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.969596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.969678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.969927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.969988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.970278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.970370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 
00:35:34.447 [2024-11-18 00:40:57.970505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.970539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.970753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.970809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.971018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.971081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.971330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.971365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.971473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.971507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 
00:35:34.447 [2024-11-18 00:40:57.971646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.971702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.971823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.971857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.971994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.972028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.972226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.972282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.972445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.972480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 
00:35:34.447 [2024-11-18 00:40:57.972632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.972666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.972850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.972906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.973154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.973211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.973438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.973472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.973571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.973606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 
00:35:34.447 [2024-11-18 00:40:57.973746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.973788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.973980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.974035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.974211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.974269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.974439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.974473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 00:35:34.447 [2024-11-18 00:40:57.974589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.447 [2024-11-18 00:40:57.974656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.447 qpair failed and we were unable to recover it. 
00:35:34.447 [2024-11-18 00:40:57.974822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.447 [2024-11-18 00:40:57.974878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.447 qpair failed and we were unable to recover it.
[... the same posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it" triple repeats for tqpair=0x18bcb40, addr=10.0.0.2, port=4420, roughly 80 more times from 00:40:57.975077 through 00:40:57.995935 ...]
00:35:34.450 [2024-11-18 00:40:57.996208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.450 [2024-11-18 00:40:57.996297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.450 qpair failed and we were unable to recover it.
[... the same triple repeats for tqpair=0x7eff50000b90, addr=10.0.0.2, port=4420, roughly 30 more times through 00:40:58.004009 ...]
00:35:34.450 [2024-11-18 00:40:58.004331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.450 [2024-11-18 00:40:58.004396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.450 qpair failed and we were unable to recover it. 00:35:34.450 [2024-11-18 00:40:58.004580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.450 [2024-11-18 00:40:58.004636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.450 qpair failed and we were unable to recover it. 00:35:34.450 [2024-11-18 00:40:58.004900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.450 [2024-11-18 00:40:58.004957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.450 qpair failed and we were unable to recover it. 00:35:34.450 [2024-11-18 00:40:58.005256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.450 [2024-11-18 00:40:58.005342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.450 qpair failed and we were unable to recover it. 00:35:34.450 [2024-11-18 00:40:58.005576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.005632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 
00:35:34.451 [2024-11-18 00:40:58.005846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.005903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.006146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.006180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.006328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.006362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.006470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.006505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.006655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.006691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 
00:35:34.451 [2024-11-18 00:40:58.007854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.007890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.008068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.008100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.008228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.008260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.008443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.008476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.008631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.008663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 
00:35:34.451 [2024-11-18 00:40:58.008820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.008851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.008959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.008990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.009149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.009180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.009357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.009430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.009535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.009570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 
00:35:34.451 [2024-11-18 00:40:58.009729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.009760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.009920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.009951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.010083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.010120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.010252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.010283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.010500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.010547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 
00:35:34.451 [2024-11-18 00:40:58.010723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.010771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.010897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.010929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.011050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.011080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.011185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.011217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.011349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.011392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 
00:35:34.451 [2024-11-18 00:40:58.011553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.011584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.011760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.011808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.012017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.012048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.451 [2024-11-18 00:40:58.012204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.451 [2024-11-18 00:40:58.012235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.451 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.012406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.012455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 
00:35:34.452 [2024-11-18 00:40:58.012640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.012692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.012876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.012933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.013061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.013092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.013221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.013251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.013422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.013475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 
00:35:34.452 [2024-11-18 00:40:58.013657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.013706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.013852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.013900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.014034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.014064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.014203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.014234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.014383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.014434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 
00:35:34.452 [2024-11-18 00:40:58.014546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.014595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.014742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.014796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.014928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.014959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.015060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.015092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.015262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.015324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 
00:35:34.452 [2024-11-18 00:40:58.015471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.015504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.015638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.015670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.015875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.015936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.016171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.016234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.016483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.016546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 
00:35:34.452 [2024-11-18 00:40:58.016861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.016922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.017220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.017286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.017478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.017510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.017705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.017765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.017967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.018030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 
00:35:34.452 [2024-11-18 00:40:58.018286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.018336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.018437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.018510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.018745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.018805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.019070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.019104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.019219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.019251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 
00:35:34.452 [2024-11-18 00:40:58.019430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.019462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.019593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.019636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.019817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.019884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.020179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.020214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.020361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.020393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 
00:35:34.452 [2024-11-18 00:40:58.020532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.020564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.020681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.020714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.020869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.020901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.021018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.021052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.021239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.021304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 
00:35:34.452 [2024-11-18 00:40:58.021475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.021507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.021675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.021736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.021949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.022016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.022212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.022271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 00:35:34.452 [2024-11-18 00:40:58.022451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.452 [2024-11-18 00:40:58.022483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.452 qpair failed and we were unable to recover it. 
00:35:34.460 [2024-11-18 00:40:58.042091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.460 [2024-11-18 00:40:58.042118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.460 qpair failed and we were unable to recover it. 00:35:34.460 [2024-11-18 00:40:58.042294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.460 [2024-11-18 00:40:58.042331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.460 qpair failed and we were unable to recover it. 00:35:34.460 [2024-11-18 00:40:58.042432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.460 [2024-11-18 00:40:58.042458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.460 qpair failed and we were unable to recover it. 00:35:34.460 [2024-11-18 00:40:58.042550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.460 [2024-11-18 00:40:58.042577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.460 qpair failed and we were unable to recover it. 00:35:34.460 [2024-11-18 00:40:58.042714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.461 [2024-11-18 00:40:58.042744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.461 qpair failed and we were unable to recover it. 
00:35:34.461 [2024-11-18 00:40:58.042863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.461 [2024-11-18 00:40:58.042893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.461 qpair failed and we were unable to recover it. 00:35:34.461 [2024-11-18 00:40:58.043049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.461 [2024-11-18 00:40:58.043079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.461 qpair failed and we were unable to recover it. 00:35:34.461 [2024-11-18 00:40:58.043203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.461 [2024-11-18 00:40:58.043232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.461 qpair failed and we were unable to recover it. 00:35:34.461 [2024-11-18 00:40:58.043358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.461 [2024-11-18 00:40:58.043392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.461 qpair failed and we were unable to recover it. 00:35:34.461 [2024-11-18 00:40:58.043518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.461 [2024-11-18 00:40:58.043546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.461 qpair failed and we were unable to recover it. 
00:35:34.461 [2024-11-18 00:40:58.043689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.461 [2024-11-18 00:40:58.043735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.461 qpair failed and we were unable to recover it. 00:35:34.461 [2024-11-18 00:40:58.043856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.461 [2024-11-18 00:40:58.043883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.462 qpair failed and we were unable to recover it. 00:35:34.462 [2024-11-18 00:40:58.043990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.462 [2024-11-18 00:40:58.044020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.462 qpair failed and we were unable to recover it. 00:35:34.462 [2024-11-18 00:40:58.044143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.462 [2024-11-18 00:40:58.044169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.462 qpair failed and we were unable to recover it. 00:35:34.462 [2024-11-18 00:40:58.044327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.462 [2024-11-18 00:40:58.044355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.462 qpair failed and we were unable to recover it. 
00:35:34.462 [2024-11-18 00:40:58.044446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.462 [2024-11-18 00:40:58.044473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.462 qpair failed and we were unable to recover it. 00:35:34.462 [2024-11-18 00:40:58.044584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.462 [2024-11-18 00:40:58.044632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.462 qpair failed and we were unable to recover it. 00:35:34.462 [2024-11-18 00:40:58.044817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.462 [2024-11-18 00:40:58.044855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.462 qpair failed and we were unable to recover it. 00:35:34.462 [2024-11-18 00:40:58.044954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.462 [2024-11-18 00:40:58.044981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.462 qpair failed and we were unable to recover it. 00:35:34.462 [2024-11-18 00:40:58.045103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.462 [2024-11-18 00:40:58.045132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.462 qpair failed and we were unable to recover it. 
00:35:34.462 [2024-11-18 00:40:58.045249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.462 [2024-11-18 00:40:58.045281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.462 qpair failed and we were unable to recover it. 00:35:34.463 [2024-11-18 00:40:58.045383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.463 [2024-11-18 00:40:58.045420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.463 qpair failed and we were unable to recover it. 00:35:34.463 [2024-11-18 00:40:58.045569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.463 [2024-11-18 00:40:58.045612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.463 qpair failed and we were unable to recover it. 00:35:34.463 [2024-11-18 00:40:58.045722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.463 [2024-11-18 00:40:58.045762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.463 qpair failed and we were unable to recover it. 00:35:34.463 [2024-11-18 00:40:58.045879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.463 [2024-11-18 00:40:58.045908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.463 qpair failed and we were unable to recover it. 
00:35:34.463 [2024-11-18 00:40:58.046056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.463 [2024-11-18 00:40:58.046085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.463 qpair failed and we were unable to recover it. 00:35:34.463 [2024-11-18 00:40:58.046186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.463 [2024-11-18 00:40:58.046213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.463 qpair failed and we were unable to recover it. 00:35:34.463 [2024-11-18 00:40:58.046299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.463 [2024-11-18 00:40:58.046340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.463 qpair failed and we were unable to recover it. 00:35:34.463 [2024-11-18 00:40:58.046441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.463 [2024-11-18 00:40:58.046469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.463 qpair failed and we were unable to recover it. 00:35:34.463 [2024-11-18 00:40:58.046555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.463 [2024-11-18 00:40:58.046581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.463 qpair failed and we were unable to recover it. 
00:35:34.463 [2024-11-18 00:40:58.046749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.463 [2024-11-18 00:40:58.046785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.463 qpair failed and we were unable to recover it. 00:35:34.463 [2024-11-18 00:40:58.047000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.464 [2024-11-18 00:40:58.047057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.464 qpair failed and we were unable to recover it. 00:35:34.464 [2024-11-18 00:40:58.047225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.464 [2024-11-18 00:40:58.047252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.464 qpair failed and we were unable to recover it. 00:35:34.464 [2024-11-18 00:40:58.047363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.464 [2024-11-18 00:40:58.047391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.464 qpair failed and we were unable to recover it. 00:35:34.464 [2024-11-18 00:40:58.047512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.464 [2024-11-18 00:40:58.047540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.464 qpair failed and we were unable to recover it. 
00:35:34.464 [2024-11-18 00:40:58.047695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.464 [2024-11-18 00:40:58.047722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.464 qpair failed and we were unable to recover it. 00:35:34.464 [2024-11-18 00:40:58.047915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.464 [2024-11-18 00:40:58.047970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.464 qpair failed and we were unable to recover it. 00:35:34.464 [2024-11-18 00:40:58.048184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.464 [2024-11-18 00:40:58.048221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.464 qpair failed and we were unable to recover it. 00:35:34.464 [2024-11-18 00:40:58.048344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.464 [2024-11-18 00:40:58.048383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.464 qpair failed and we were unable to recover it. 00:35:34.464 [2024-11-18 00:40:58.048471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.465 [2024-11-18 00:40:58.048498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.465 qpair failed and we were unable to recover it. 
00:35:34.465 [2024-11-18 00:40:58.048631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.465 [2024-11-18 00:40:58.048657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.465 qpair failed and we were unable to recover it. 00:35:34.465 [2024-11-18 00:40:58.048776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.465 [2024-11-18 00:40:58.048803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.465 qpair failed and we were unable to recover it. 00:35:34.465 [2024-11-18 00:40:58.048894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.465 [2024-11-18 00:40:58.048921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.465 qpair failed and we were unable to recover it. 00:35:34.465 [2024-11-18 00:40:58.049055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.465 [2024-11-18 00:40:58.049094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.465 qpair failed and we were unable to recover it. 00:35:34.465 [2024-11-18 00:40:58.049246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.465 [2024-11-18 00:40:58.049272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.465 qpair failed and we were unable to recover it. 
00:35:34.465 [2024-11-18 00:40:58.049410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.465 [2024-11-18 00:40:58.049437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.465 qpair failed and we were unable to recover it. 00:35:34.465 [2024-11-18 00:40:58.049564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.465 [2024-11-18 00:40:58.049591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.465 qpair failed and we were unable to recover it. 00:35:34.466 [2024-11-18 00:40:58.049692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.466 [2024-11-18 00:40:58.049719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.466 qpair failed and we were unable to recover it. 00:35:34.466 [2024-11-18 00:40:58.049867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.466 [2024-11-18 00:40:58.049905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.466 qpair failed and we were unable to recover it. 00:35:34.466 [2024-11-18 00:40:58.050021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.466 [2024-11-18 00:40:58.050074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.466 qpair failed and we were unable to recover it. 
00:35:34.466 [2024-11-18 00:40:58.050281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.466 [2024-11-18 00:40:58.050325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.466 qpair failed and we were unable to recover it. 00:35:34.466 [2024-11-18 00:40:58.050442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.466 [2024-11-18 00:40:58.050469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.466 qpair failed and we were unable to recover it. 00:35:34.466 [2024-11-18 00:40:58.050558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.466 [2024-11-18 00:40:58.050585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.466 qpair failed and we were unable to recover it. 00:35:34.466 [2024-11-18 00:40:58.050737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.467 [2024-11-18 00:40:58.050767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.467 qpair failed and we were unable to recover it. 00:35:34.467 [2024-11-18 00:40:58.050910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.467 [2024-11-18 00:40:58.050947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.467 qpair failed and we were unable to recover it. 
00:35:34.467 [2024-11-18 00:40:58.051078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.467 [2024-11-18 00:40:58.051130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.467 qpair failed and we were unable to recover it. 00:35:34.467 [2024-11-18 00:40:58.051343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.467 [2024-11-18 00:40:58.051372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.467 qpair failed and we were unable to recover it. 00:35:34.467 [2024-11-18 00:40:58.051462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.467 [2024-11-18 00:40:58.051490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.467 qpair failed and we were unable to recover it. 00:35:34.467 [2024-11-18 00:40:58.051600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.467 [2024-11-18 00:40:58.051626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.467 qpair failed and we were unable to recover it. 00:35:34.467 [2024-11-18 00:40:58.051774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.467 [2024-11-18 00:40:58.051824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.467 qpair failed and we were unable to recover it. 
00:35:34.467 [2024-11-18 00:40:58.052012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.467 [2024-11-18 00:40:58.052062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.467 qpair failed and we were unable to recover it. 00:35:34.467 [2024-11-18 00:40:58.052226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.467 [2024-11-18 00:40:58.052254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.467 qpair failed and we were unable to recover it. 00:35:34.467 [2024-11-18 00:40:58.052378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.467 [2024-11-18 00:40:58.052406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.467 qpair failed and we were unable to recover it. 00:35:34.467 [2024-11-18 00:40:58.052486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.052512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 00:35:34.468 [2024-11-18 00:40:58.052642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.052669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 
00:35:34.468 [2024-11-18 00:40:58.052808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.052855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 00:35:34.468 [2024-11-18 00:40:58.053030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.053064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 00:35:34.468 [2024-11-18 00:40:58.053263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.053298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 00:35:34.468 [2024-11-18 00:40:58.053423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.053451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 00:35:34.468 [2024-11-18 00:40:58.053542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.053568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 
00:35:34.468 [2024-11-18 00:40:58.053655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.053682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 00:35:34.468 [2024-11-18 00:40:58.053774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.053829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 00:35:34.468 [2024-11-18 00:40:58.054044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.054080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 00:35:34.468 [2024-11-18 00:40:58.054265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.054294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 00:35:34.468 [2024-11-18 00:40:58.054433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.054467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 
00:35:34.468 [2024-11-18 00:40:58.054553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.054579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 00:35:34.468 [2024-11-18 00:40:58.054679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.054706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 00:35:34.468 [2024-11-18 00:40:58.054826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.054853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 00:35:34.468 [2024-11-18 00:40:58.054970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.055008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 00:35:34.468 [2024-11-18 00:40:58.055225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.055270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 
00:35:34.468 [2024-11-18 00:40:58.055411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.055439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 00:35:34.468 [2024-11-18 00:40:58.055535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.055568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 00:35:34.468 [2024-11-18 00:40:58.055737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.055783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 00:35:34.468 [2024-11-18 00:40:58.055986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.056039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 00:35:34.468 [2024-11-18 00:40:58.056240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.056274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 
00:35:34.468 [2024-11-18 00:40:58.056400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.468 [2024-11-18 00:40:58.056429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.468 qpair failed and we were unable to recover it. 00:35:34.468 [2024-11-18 00:40:58.056572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.469 [2024-11-18 00:40:58.056600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.469 qpair failed and we were unable to recover it. 00:35:34.469 [2024-11-18 00:40:58.056697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.469 [2024-11-18 00:40:58.056724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.469 qpair failed and we were unable to recover it. 00:35:34.469 [2024-11-18 00:40:58.056837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.469 [2024-11-18 00:40:58.056863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.469 qpair failed and we were unable to recover it. 00:35:34.469 [2024-11-18 00:40:58.056976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.469 [2024-11-18 00:40:58.057003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.469 qpair failed and we were unable to recover it. 
00:35:34.469 [2024-11-18 00:40:58.057145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.469 [2024-11-18 00:40:58.057171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.469 qpair failed and we were unable to recover it. 00:35:34.469 [2024-11-18 00:40:58.057271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.469 [2024-11-18 00:40:58.057320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.469 qpair failed and we were unable to recover it. 00:35:34.469 [2024-11-18 00:40:58.057458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.469 [2024-11-18 00:40:58.057487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.469 qpair failed and we were unable to recover it. 00:35:34.469 [2024-11-18 00:40:58.057594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.469 [2024-11-18 00:40:58.057622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.469 qpair failed and we were unable to recover it. 00:35:34.469 [2024-11-18 00:40:58.057741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.469 [2024-11-18 00:40:58.057769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.469 qpair failed and we were unable to recover it. 
00:35:34.469 [2024-11-18 00:40:58.057882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.469 [2024-11-18 00:40:58.057909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.469 qpair failed and we were unable to recover it. 00:35:34.470 [2024-11-18 00:40:58.057995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.470 [2024-11-18 00:40:58.058023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.470 qpair failed and we were unable to recover it. 00:35:34.470 [2024-11-18 00:40:58.058135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.470 [2024-11-18 00:40:58.058162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.470 qpair failed and we were unable to recover it. 00:35:34.470 [2024-11-18 00:40:58.058273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.470 [2024-11-18 00:40:58.058299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.470 qpair failed and we were unable to recover it. 00:35:34.470 [2024-11-18 00:40:58.058435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.470 [2024-11-18 00:40:58.058463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.470 qpair failed and we were unable to recover it. 
00:35:34.470 [2024-11-18 00:40:58.058559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.470 [2024-11-18 00:40:58.058592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.470 qpair failed and we were unable to recover it. 00:35:34.470 [2024-11-18 00:40:58.058735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.470 [2024-11-18 00:40:58.058762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.470 qpair failed and we were unable to recover it. 00:35:34.470 [2024-11-18 00:40:58.058877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.470 [2024-11-18 00:40:58.058903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.470 qpair failed and we were unable to recover it. 00:35:34.470 [2024-11-18 00:40:58.059052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.470 [2024-11-18 00:40:58.059079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.470 qpair failed and we were unable to recover it. 00:35:34.470 [2024-11-18 00:40:58.059192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.470 [2024-11-18 00:40:58.059219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.470 qpair failed and we were unable to recover it. 
00:35:34.470 [2024-11-18 00:40:58.059301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.470 [2024-11-18 00:40:58.059338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.470 qpair failed and we were unable to recover it. 00:35:34.470 [2024-11-18 00:40:58.059422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.470 [2024-11-18 00:40:58.059446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.470 qpair failed and we were unable to recover it. 00:35:34.470 [2024-11-18 00:40:58.059534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.470 [2024-11-18 00:40:58.059561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.470 qpair failed and we were unable to recover it. 00:35:34.470 [2024-11-18 00:40:58.059714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.470 [2024-11-18 00:40:58.059740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.470 qpair failed and we were unable to recover it. 00:35:34.470 [2024-11-18 00:40:58.059828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-11-18 00:40:58.059854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.471 qpair failed and we were unable to recover it. 
00:35:34.471 [2024-11-18 00:40:58.059938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-11-18 00:40:58.059964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.471 qpair failed and we were unable to recover it. 00:35:34.471 [2024-11-18 00:40:58.060096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-11-18 00:40:58.060124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.471 qpair failed and we were unable to recover it. 00:35:34.471 [2024-11-18 00:40:58.060223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-11-18 00:40:58.060249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.471 qpair failed and we were unable to recover it. 00:35:34.471 [2024-11-18 00:40:58.060384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-11-18 00:40:58.060411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.471 qpair failed and we were unable to recover it. 00:35:34.471 [2024-11-18 00:40:58.060533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-11-18 00:40:58.060559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.471 qpair failed and we were unable to recover it. 
00:35:34.471 [2024-11-18 00:40:58.060740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-11-18 00:40:58.060780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.471 qpair failed and we were unable to recover it. 00:35:34.471 [2024-11-18 00:40:58.060906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-11-18 00:40:58.060935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.471 qpair failed and we were unable to recover it. 00:35:34.471 [2024-11-18 00:40:58.061057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-11-18 00:40:58.061086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.472 qpair failed and we were unable to recover it. 00:35:34.472 [2024-11-18 00:40:58.061233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.472 [2024-11-18 00:40:58.061272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.472 qpair failed and we were unable to recover it. 00:35:34.472 [2024-11-18 00:40:58.061379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.472 [2024-11-18 00:40:58.061408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.472 qpair failed and we were unable to recover it. 
00:35:34.472 [2024-11-18 00:40:58.061540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.472 [2024-11-18 00:40:58.061570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.472 qpair failed and we were unable to recover it. 00:35:34.472 [2024-11-18 00:40:58.061667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.472 [2024-11-18 00:40:58.061694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.472 qpair failed and we were unable to recover it. 00:35:34.472 [2024-11-18 00:40:58.061862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.472 [2024-11-18 00:40:58.061908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.472 qpair failed and we were unable to recover it. 00:35:34.472 [2024-11-18 00:40:58.062045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.472 [2024-11-18 00:40:58.062089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.472 qpair failed and we were unable to recover it. 00:35:34.472 [2024-11-18 00:40:58.062215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.472 [2024-11-18 00:40:58.062241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.472 qpair failed and we were unable to recover it. 
00:35:34.472 [2024-11-18 00:40:58.062388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.472 [2024-11-18 00:40:58.062433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.472 qpair failed and we were unable to recover it. 00:35:34.472 [2024-11-18 00:40:58.062515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.472 [2024-11-18 00:40:58.062542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.472 qpair failed and we were unable to recover it. 00:35:34.472 [2024-11-18 00:40:58.062701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.472 [2024-11-18 00:40:58.062750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.472 qpair failed and we were unable to recover it. 00:35:34.473 [2024-11-18 00:40:58.062890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.473 [2024-11-18 00:40:58.062936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.473 qpair failed and we were unable to recover it. 00:35:34.473 [2024-11-18 00:40:58.063047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.473 [2024-11-18 00:40:58.063073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.473 qpair failed and we were unable to recover it. 
00:35:34.473 [2024-11-18 00:40:58.063793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.473 [2024-11-18 00:40:58.063824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.473 qpair failed and we were unable to recover it. 00:35:34.473 [2024-11-18 00:40:58.063974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.473 [2024-11-18 00:40:58.064010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.473 qpair failed and we were unable to recover it. 00:35:34.473 [2024-11-18 00:40:58.064127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.473 [2024-11-18 00:40:58.064162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.473 qpair failed and we were unable to recover it. 00:35:34.473 [2024-11-18 00:40:58.064270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.473 [2024-11-18 00:40:58.064296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.473 qpair failed and we were unable to recover it. 00:35:34.473 [2024-11-18 00:40:58.064389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.473 [2024-11-18 00:40:58.064415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.473 qpair failed and we were unable to recover it. 
00:35:34.473 [2024-11-18 00:40:58.064512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.473 [2024-11-18 00:40:58.064539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.473 qpair failed and we were unable to recover it. 00:35:34.473 [2024-11-18 00:40:58.064681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.473 [2024-11-18 00:40:58.064708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.473 qpair failed and we were unable to recover it. 00:35:34.473 [2024-11-18 00:40:58.064796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.473 [2024-11-18 00:40:58.064821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.473 qpair failed and we were unable to recover it. 00:35:34.473 [2024-11-18 00:40:58.064938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.474 [2024-11-18 00:40:58.064965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.474 qpair failed and we were unable to recover it. 00:35:34.474 [2024-11-18 00:40:58.065051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.474 [2024-11-18 00:40:58.065078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.474 qpair failed and we were unable to recover it. 
00:35:34.474 [2024-11-18 00:40:58.065169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.474 [2024-11-18 00:40:58.065195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.474 qpair failed and we were unable to recover it. 00:35:34.474 [2024-11-18 00:40:58.065348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.474 [2024-11-18 00:40:58.065376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.474 qpair failed and we were unable to recover it. 00:35:34.474 [2024-11-18 00:40:58.065493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.474 [2024-11-18 00:40:58.065519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.474 qpair failed and we were unable to recover it. 00:35:34.474 [2024-11-18 00:40:58.065621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.474 [2024-11-18 00:40:58.065660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.474 qpair failed and we were unable to recover it. 00:35:34.474 [2024-11-18 00:40:58.065820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.474 [2024-11-18 00:40:58.065848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.474 qpair failed and we were unable to recover it. 
00:35:34.474 [2024-11-18 00:40:58.065926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.474 [2024-11-18 00:40:58.065974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.474 qpair failed and we were unable to recover it. 00:35:34.474 [2024-11-18 00:40:58.066065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.474 [2024-11-18 00:40:58.066091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.474 qpair failed and we were unable to recover it. 00:35:34.475 [2024-11-18 00:40:58.066177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.475 [2024-11-18 00:40:58.066203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.475 qpair failed and we were unable to recover it. 00:35:34.475 [2024-11-18 00:40:58.066376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.475 [2024-11-18 00:40:58.066406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.475 qpair failed and we were unable to recover it. 00:35:34.475 [2024-11-18 00:40:58.066534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.475 [2024-11-18 00:40:58.066563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.475 qpair failed and we were unable to recover it. 
00:35:34.475 [2024-11-18 00:40:58.066698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.475 [2024-11-18 00:40:58.066727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.475 qpair failed and we were unable to recover it. 00:35:34.475 [2024-11-18 00:40:58.066814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.475 [2024-11-18 00:40:58.066843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.475 qpair failed and we were unable to recover it. 00:35:34.475 [2024-11-18 00:40:58.066996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.475 [2024-11-18 00:40:58.067025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.475 qpair failed and we were unable to recover it. 00:35:34.475 [2024-11-18 00:40:58.067146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.475 [2024-11-18 00:40:58.067176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.475 qpair failed and we were unable to recover it. 00:35:34.475 [2024-11-18 00:40:58.067294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.475 [2024-11-18 00:40:58.067347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.475 qpair failed and we were unable to recover it. 
00:35:34.475 [2024-11-18 00:40:58.067477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.475 [2024-11-18 00:40:58.067522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.475 qpair failed and we were unable to recover it. 00:35:34.475 [2024-11-18 00:40:58.067642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.476 [2024-11-18 00:40:58.067673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.476 qpair failed and we were unable to recover it. 00:35:34.476 [2024-11-18 00:40:58.067816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.476 [2024-11-18 00:40:58.067845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.476 qpair failed and we were unable to recover it. 00:35:34.476 [2024-11-18 00:40:58.067966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.476 [2024-11-18 00:40:58.067995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.476 qpair failed and we were unable to recover it. 00:35:34.476 [2024-11-18 00:40:58.068093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.477 [2024-11-18 00:40:58.068124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.477 qpair failed and we were unable to recover it. 
00:35:34.477 [2024-11-18 00:40:58.068267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.477 [2024-11-18 00:40:58.068298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.477 qpair failed and we were unable to recover it. 00:35:34.477 [2024-11-18 00:40:58.068432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.477 [2024-11-18 00:40:58.068462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.477 qpair failed and we were unable to recover it. 00:35:34.477 [2024-11-18 00:40:58.068562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.477 [2024-11-18 00:40:58.068591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.477 qpair failed and we were unable to recover it. 00:35:34.477 [2024-11-18 00:40:58.068689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.477 [2024-11-18 00:40:58.068718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.477 qpair failed and we were unable to recover it. 00:35:34.477 [2024-11-18 00:40:58.068836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.477 [2024-11-18 00:40:58.068865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.477 qpair failed and we were unable to recover it. 
00:35:34.477 [2024-11-18 00:40:58.069025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.477 [2024-11-18 00:40:58.069054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.477 qpair failed and we were unable to recover it. 00:35:34.477 [2024-11-18 00:40:58.069203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.477 [2024-11-18 00:40:58.069238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.477 qpair failed and we were unable to recover it. 00:35:34.477 [2024-11-18 00:40:58.069500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.478 [2024-11-18 00:40:58.069527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.478 qpair failed and we were unable to recover it. 00:35:34.478 [2024-11-18 00:40:58.069652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.478 [2024-11-18 00:40:58.069682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.478 qpair failed and we were unable to recover it. 00:35:34.478 [2024-11-18 00:40:58.069785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.478 [2024-11-18 00:40:58.069827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.478 qpair failed and we were unable to recover it. 
00:35:34.478 [2024-11-18 00:40:58.070049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.478 [2024-11-18 00:40:58.070078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.478 qpair failed and we were unable to recover it.
00:35:34.478 [2024-11-18 00:40:58.071129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.478 [2024-11-18 00:40:58.071187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.478 qpair failed and we were unable to recover it.
00:35:34.481 [2024-11-18 00:40:58.075854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.481 [2024-11-18 00:40:58.075892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.481 qpair failed and we were unable to recover it.
00:35:34.489 [2024-11-18 00:40:58.087296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.489 [2024-11-18 00:40:58.087333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.489 qpair failed and we were unable to recover it. 00:35:34.489 [2024-11-18 00:40:58.087446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.489 [2024-11-18 00:40:58.087472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.489 qpair failed and we were unable to recover it. 00:35:34.489 [2024-11-18 00:40:58.087567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.489 [2024-11-18 00:40:58.087593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.489 qpair failed and we were unable to recover it. 00:35:34.489 [2024-11-18 00:40:58.087729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.489 [2024-11-18 00:40:58.087757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.489 qpair failed and we were unable to recover it. 00:35:34.489 [2024-11-18 00:40:58.087912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.489 [2024-11-18 00:40:58.087958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.489 qpair failed and we were unable to recover it. 
00:35:34.489 [2024-11-18 00:40:58.088113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.489 [2024-11-18 00:40:58.088143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.489 qpair failed and we were unable to recover it. 00:35:34.489 [2024-11-18 00:40:58.088242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.489 [2024-11-18 00:40:58.088268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.489 qpair failed and we were unable to recover it. 00:35:34.489 [2024-11-18 00:40:58.088387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.489 [2024-11-18 00:40:58.088427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.489 qpair failed and we were unable to recover it. 00:35:34.489 [2024-11-18 00:40:58.088551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.489 [2024-11-18 00:40:58.088578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.489 qpair failed and we were unable to recover it. 00:35:34.489 [2024-11-18 00:40:58.088752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.489 [2024-11-18 00:40:58.088783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.489 qpair failed and we were unable to recover it. 
00:35:34.489 [2024-11-18 00:40:58.088900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.489 [2024-11-18 00:40:58.088936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.489 qpair failed and we were unable to recover it. 00:35:34.489 [2024-11-18 00:40:58.089069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.489 [2024-11-18 00:40:58.089099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.489 qpair failed and we were unable to recover it. 00:35:34.489 [2024-11-18 00:40:58.089199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.490 [2024-11-18 00:40:58.089225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.490 qpair failed and we were unable to recover it. 00:35:34.490 [2024-11-18 00:40:58.089325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.490 [2024-11-18 00:40:58.089352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.490 qpair failed and we were unable to recover it. 00:35:34.490 [2024-11-18 00:40:58.089441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.490 [2024-11-18 00:40:58.089468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.490 qpair failed and we were unable to recover it. 
00:35:34.490 [2024-11-18 00:40:58.089577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.490 [2024-11-18 00:40:58.089608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.490 qpair failed and we were unable to recover it. 00:35:34.490 [2024-11-18 00:40:58.089796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.490 [2024-11-18 00:40:58.089844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.490 qpair failed and we were unable to recover it. 00:35:34.490 [2024-11-18 00:40:58.089993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.490 [2024-11-18 00:40:58.090038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.490 qpair failed and we were unable to recover it. 00:35:34.490 [2024-11-18 00:40:58.090214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.490 [2024-11-18 00:40:58.090254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.490 qpair failed and we were unable to recover it. 00:35:34.490 [2024-11-18 00:40:58.090358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.490 [2024-11-18 00:40:58.090387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.490 qpair failed and we were unable to recover it. 
00:35:34.490 [2024-11-18 00:40:58.090469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.490 [2024-11-18 00:40:58.090496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.490 qpair failed and we were unable to recover it. 00:35:34.490 [2024-11-18 00:40:58.090640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.490 [2024-11-18 00:40:58.090671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.490 qpair failed and we were unable to recover it. 00:35:34.490 [2024-11-18 00:40:58.090816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.490 [2024-11-18 00:40:58.090861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.490 qpair failed and we were unable to recover it. 00:35:34.490 [2024-11-18 00:40:58.091027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.490 [2024-11-18 00:40:58.091071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.490 qpair failed and we were unable to recover it. 00:35:34.490 [2024-11-18 00:40:58.091192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.490 [2024-11-18 00:40:58.091220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.490 qpair failed and we were unable to recover it. 
00:35:34.490 [2024-11-18 00:40:58.091338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.490 [2024-11-18 00:40:58.091367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.491 qpair failed and we were unable to recover it. 00:35:34.491 [2024-11-18 00:40:58.091454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.491 [2024-11-18 00:40:58.091501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.491 qpair failed and we were unable to recover it. 00:35:34.491 [2024-11-18 00:40:58.091595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.491 [2024-11-18 00:40:58.091653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.491 qpair failed and we were unable to recover it. 00:35:34.491 [2024-11-18 00:40:58.091766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.491 [2024-11-18 00:40:58.091812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.491 qpair failed and we were unable to recover it. 00:35:34.491 [2024-11-18 00:40:58.091978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.491 [2024-11-18 00:40:58.092009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.491 qpair failed and we were unable to recover it. 
00:35:34.491 [2024-11-18 00:40:58.092146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.491 [2024-11-18 00:40:58.092174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.491 qpair failed and we were unable to recover it. 00:35:34.491 [2024-11-18 00:40:58.092284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.491 [2024-11-18 00:40:58.092323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.491 qpair failed and we were unable to recover it. 00:35:34.491 [2024-11-18 00:40:58.092457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.491 [2024-11-18 00:40:58.092503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.491 qpair failed and we were unable to recover it. 00:35:34.491 [2024-11-18 00:40:58.092629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.491 [2024-11-18 00:40:58.092655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.491 qpair failed and we were unable to recover it. 00:35:34.491 [2024-11-18 00:40:58.092737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.491 [2024-11-18 00:40:58.092764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.491 qpair failed and we were unable to recover it. 
00:35:34.491 [2024-11-18 00:40:58.092850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.491 [2024-11-18 00:40:58.092877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.491 qpair failed and we were unable to recover it. 00:35:34.491 [2024-11-18 00:40:58.092995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.491 [2024-11-18 00:40:58.093022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.491 qpair failed and we were unable to recover it. 00:35:34.491 [2024-11-18 00:40:58.093148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.491 [2024-11-18 00:40:58.093188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.491 qpair failed and we were unable to recover it. 00:35:34.492 [2024-11-18 00:40:58.093280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.492 [2024-11-18 00:40:58.093320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.492 qpair failed and we were unable to recover it. 00:35:34.492 [2024-11-18 00:40:58.093413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.492 [2024-11-18 00:40:58.093441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.492 qpair failed and we were unable to recover it. 
00:35:34.492 [2024-11-18 00:40:58.093527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.492 [2024-11-18 00:40:58.093553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.492 qpair failed and we were unable to recover it. 00:35:34.492 [2024-11-18 00:40:58.093688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.492 [2024-11-18 00:40:58.093715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.492 qpair failed and we were unable to recover it. 00:35:34.492 [2024-11-18 00:40:58.093855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.492 [2024-11-18 00:40:58.093881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.492 qpair failed and we were unable to recover it. 00:35:34.492 [2024-11-18 00:40:58.094032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.492 [2024-11-18 00:40:58.094079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.492 qpair failed and we were unable to recover it. 00:35:34.492 [2024-11-18 00:40:58.094175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.492 [2024-11-18 00:40:58.094205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.492 qpair failed and we were unable to recover it. 
00:35:34.492 [2024-11-18 00:40:58.094349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.493 [2024-11-18 00:40:58.094392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.493 qpair failed and we were unable to recover it. 00:35:34.493 [2024-11-18 00:40:58.094502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.493 [2024-11-18 00:40:58.094531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.493 qpair failed and we were unable to recover it. 00:35:34.493 [2024-11-18 00:40:58.094630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.493 [2024-11-18 00:40:58.094659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.493 qpair failed and we were unable to recover it. 00:35:34.493 [2024-11-18 00:40:58.094809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.493 [2024-11-18 00:40:58.094838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.493 qpair failed and we were unable to recover it. 00:35:34.493 [2024-11-18 00:40:58.094968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.493 [2024-11-18 00:40:58.094997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.493 qpair failed and we were unable to recover it. 
00:35:34.493 [2024-11-18 00:40:58.095091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.493 [2024-11-18 00:40:58.095121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.493 qpair failed and we were unable to recover it. 00:35:34.493 [2024-11-18 00:40:58.095212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.493 [2024-11-18 00:40:58.095245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.493 qpair failed and we were unable to recover it. 00:35:34.493 [2024-11-18 00:40:58.095386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.493 [2024-11-18 00:40:58.095414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.493 qpair failed and we were unable to recover it. 00:35:34.493 [2024-11-18 00:40:58.095499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.493 [2024-11-18 00:40:58.095526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.493 qpair failed and we were unable to recover it. 00:35:34.493 [2024-11-18 00:40:58.095655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.494 [2024-11-18 00:40:58.095701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.494 qpair failed and we were unable to recover it. 
00:35:34.494 [2024-11-18 00:40:58.095847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.494 [2024-11-18 00:40:58.095878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.494 qpair failed and we were unable to recover it. 00:35:34.494 [2024-11-18 00:40:58.095992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.494 [2024-11-18 00:40:58.096020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.494 qpair failed and we were unable to recover it. 00:35:34.494 [2024-11-18 00:40:58.096146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.494 [2024-11-18 00:40:58.096172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.494 qpair failed and we were unable to recover it. 00:35:34.494 [2024-11-18 00:40:58.096323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.494 [2024-11-18 00:40:58.096352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.494 qpair failed and we were unable to recover it. 00:35:34.494 [2024-11-18 00:40:58.096448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.494 [2024-11-18 00:40:58.096475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.494 qpair failed and we were unable to recover it. 
00:35:34.494 [2024-11-18 00:40:58.096550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.494 [2024-11-18 00:40:58.096577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.494 qpair failed and we were unable to recover it. 00:35:34.494 [2024-11-18 00:40:58.096734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.494 [2024-11-18 00:40:58.096782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.494 qpair failed and we were unable to recover it. 00:35:34.494 [2024-11-18 00:40:58.096929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.494 [2024-11-18 00:40:58.096977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.494 qpair failed and we were unable to recover it. 00:35:34.494 [2024-11-18 00:40:58.097103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.494 [2024-11-18 00:40:58.097132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.494 qpair failed and we were unable to recover it. 00:35:34.494 [2024-11-18 00:40:58.097233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.494 [2024-11-18 00:40:58.097259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.494 qpair failed and we were unable to recover it. 
00:35:34.494 [2024-11-18 00:40:58.097365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.494 [2024-11-18 00:40:58.097393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.494 qpair failed and we were unable to recover it. 00:35:34.494 [2024-11-18 00:40:58.097472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.494 [2024-11-18 00:40:58.097499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.494 qpair failed and we were unable to recover it. 00:35:34.494 [2024-11-18 00:40:58.097601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.495 [2024-11-18 00:40:58.097630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.495 qpair failed and we were unable to recover it. 00:35:34.495 [2024-11-18 00:40:58.097805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.495 [2024-11-18 00:40:58.097836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.495 qpair failed and we were unable to recover it. 00:35:34.495 [2024-11-18 00:40:58.097943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.495 [2024-11-18 00:40:58.097970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.495 qpair failed and we were unable to recover it. 
00:35:34.495 [2024-11-18 00:40:58.098085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.495 [2024-11-18 00:40:58.098115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.495 qpair failed and we were unable to recover it. 00:35:34.495 [2024-11-18 00:40:58.098275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.495 [2024-11-18 00:40:58.098324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.495 qpair failed and we were unable to recover it. 00:35:34.495 [2024-11-18 00:40:58.098433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.495 [2024-11-18 00:40:58.098460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.495 qpair failed and we were unable to recover it. 00:35:34.495 [2024-11-18 00:40:58.098575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.495 [2024-11-18 00:40:58.098602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.495 qpair failed and we were unable to recover it. 00:35:34.495 [2024-11-18 00:40:58.098747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.495 [2024-11-18 00:40:58.098774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.495 qpair failed and we were unable to recover it. 
00:35:34.495 [2024-11-18 00:40:58.098889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.495 [2024-11-18 00:40:58.098915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.495 qpair failed and we were unable to recover it. 00:35:34.495 [2024-11-18 00:40:58.099017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.495 [2024-11-18 00:40:58.099048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.495 qpair failed and we were unable to recover it. 00:35:34.496 [2024-11-18 00:40:58.099182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.496 [2024-11-18 00:40:58.099208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.496 qpair failed and we were unable to recover it. 00:35:34.496 [2024-11-18 00:40:58.099344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.496 [2024-11-18 00:40:58.099384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.496 qpair failed and we were unable to recover it. 00:35:34.496 [2024-11-18 00:40:58.099529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.496 [2024-11-18 00:40:58.099560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.496 qpair failed and we were unable to recover it. 
00:35:34.496 [2024-11-18 00:40:58.099652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.496 [2024-11-18 00:40:58.099682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.496 qpair failed and we were unable to recover it. 00:35:34.496 [2024-11-18 00:40:58.099822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.496 [2024-11-18 00:40:58.099867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.496 qpair failed and we were unable to recover it. 00:35:34.496 [2024-11-18 00:40:58.100021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.496 [2024-11-18 00:40:58.100068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.496 qpair failed and we were unable to recover it. 00:35:34.496 [2024-11-18 00:40:58.100155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.496 [2024-11-18 00:40:58.100182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.496 qpair failed and we were unable to recover it. 00:35:34.496 [2024-11-18 00:40:58.100267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.496 [2024-11-18 00:40:58.100294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.496 qpair failed and we were unable to recover it. 
00:35:34.496 [2024-11-18 00:40:58.100383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.497 [2024-11-18 00:40:58.100410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.497 qpair failed and we were unable to recover it. 00:35:34.497 [2024-11-18 00:40:58.100514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.497 [2024-11-18 00:40:58.100543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.497 qpair failed and we were unable to recover it. 00:35:34.497 [2024-11-18 00:40:58.100652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.497 [2024-11-18 00:40:58.100680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.497 qpair failed and we were unable to recover it. 00:35:34.497 [2024-11-18 00:40:58.100824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.497 [2024-11-18 00:40:58.100852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.497 qpair failed and we were unable to recover it. 00:35:34.497 [2024-11-18 00:40:58.100994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.497 [2024-11-18 00:40:58.101026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.497 qpair failed and we were unable to recover it. 
00:35:34.497 [2024-11-18 00:40:58.101167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.497 [2024-11-18 00:40:58.101192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.497 qpair failed and we were unable to recover it. 00:35:34.497 [2024-11-18 00:40:58.101302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.497 [2024-11-18 00:40:58.101335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.497 qpair failed and we were unable to recover it. 00:35:34.497 [2024-11-18 00:40:58.101430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.497 [2024-11-18 00:40:58.101457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.497 qpair failed and we were unable to recover it. 00:35:34.497 [2024-11-18 00:40:58.101559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.497 [2024-11-18 00:40:58.101607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.497 qpair failed and we were unable to recover it. 00:35:34.497 [2024-11-18 00:40:58.101745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.497 [2024-11-18 00:40:58.101776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.497 qpair failed and we were unable to recover it. 
00:35:34.497 [2024-11-18 00:40:58.101878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.497 [2024-11-18 00:40:58.101908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.497 qpair failed and we were unable to recover it. 00:35:34.498 [2024-11-18 00:40:58.102054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.498 [2024-11-18 00:40:58.102083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.498 qpair failed and we were unable to recover it. 00:35:34.498 [2024-11-18 00:40:58.102211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.498 [2024-11-18 00:40:58.102238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.498 qpair failed and we were unable to recover it. 00:35:34.498 [2024-11-18 00:40:58.102356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.498 [2024-11-18 00:40:58.102391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.498 qpair failed and we were unable to recover it. 00:35:34.498 [2024-11-18 00:40:58.102523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.498 [2024-11-18 00:40:58.102550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.498 qpair failed and we were unable to recover it. 
00:35:34.498 [2024-11-18 00:40:58.102688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.498 [2024-11-18 00:40:58.102732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.498 qpair failed and we were unable to recover it. 00:35:34.498 [2024-11-18 00:40:58.102875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.498 [2024-11-18 00:40:58.102901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.498 qpair failed and we were unable to recover it. 00:35:34.498 [2024-11-18 00:40:58.103016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.498 [2024-11-18 00:40:58.103044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.498 qpair failed and we were unable to recover it. 00:35:34.498 [2024-11-18 00:40:58.103159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.498 [2024-11-18 00:40:58.103187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.498 qpair failed and we were unable to recover it. 00:35:34.498 [2024-11-18 00:40:58.103276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.498 [2024-11-18 00:40:58.103303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.499 qpair failed and we were unable to recover it. 
00:35:34.499 [2024-11-18 00:40:58.103476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.499 [2024-11-18 00:40:58.103505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.499 qpair failed and we were unable to recover it. 00:35:34.499 [2024-11-18 00:40:58.103643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.499 [2024-11-18 00:40:58.103674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.499 qpair failed and we were unable to recover it. 00:35:34.499 [2024-11-18 00:40:58.103865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.499 [2024-11-18 00:40:58.103911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.499 qpair failed and we were unable to recover it. 00:35:34.499 [2024-11-18 00:40:58.104024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.499 [2024-11-18 00:40:58.104071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.499 qpair failed and we were unable to recover it. 00:35:34.499 [2024-11-18 00:40:58.104200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.499 [2024-11-18 00:40:58.104229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.499 qpair failed and we were unable to recover it. 
00:35:34.499 [2024-11-18 00:40:58.104359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.499 [2024-11-18 00:40:58.104387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.499 qpair failed and we were unable to recover it. 00:35:34.499 [2024-11-18 00:40:58.104500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.499 [2024-11-18 00:40:58.104544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.499 qpair failed and we were unable to recover it. 00:35:34.499 [2024-11-18 00:40:58.104674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.499 [2024-11-18 00:40:58.104719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.499 qpair failed and we were unable to recover it. 00:35:34.499 [2024-11-18 00:40:58.104819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.499 [2024-11-18 00:40:58.104850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.499 qpair failed and we were unable to recover it. 00:35:34.499 [2024-11-18 00:40:58.104957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.499 [2024-11-18 00:40:58.105002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.499 qpair failed and we were unable to recover it. 
00:35:34.499 [2024-11-18 00:40:58.105142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.499 [2024-11-18 00:40:58.105169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.499 qpair failed and we were unable to recover it. 00:35:34.499 [2024-11-18 00:40:58.105262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.499 [2024-11-18 00:40:58.105290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.499 qpair failed and we were unable to recover it. 00:35:34.499 [2024-11-18 00:40:58.105393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.499 [2024-11-18 00:40:58.105432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.499 qpair failed and we were unable to recover it. 00:35:34.500 [2024-11-18 00:40:58.105527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.500 [2024-11-18 00:40:58.105573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.500 qpair failed and we were unable to recover it. 00:35:34.500 [2024-11-18 00:40:58.105685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.500 [2024-11-18 00:40:58.105716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.500 qpair failed and we were unable to recover it. 
00:35:34.500 [2024-11-18 00:40:58.105872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.500 [2024-11-18 00:40:58.105918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.500 qpair failed and we were unable to recover it. 00:35:34.500 [2024-11-18 00:40:58.106055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.500 [2024-11-18 00:40:58.106100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.500 qpair failed and we were unable to recover it. 00:35:34.500 [2024-11-18 00:40:58.106234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.500 [2024-11-18 00:40:58.106263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.500 qpair failed and we were unable to recover it. 00:35:34.500 [2024-11-18 00:40:58.106382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.500 [2024-11-18 00:40:58.106410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.500 qpair failed and we were unable to recover it. 00:35:34.500 [2024-11-18 00:40:58.106514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.500 [2024-11-18 00:40:58.106546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.500 qpair failed and we were unable to recover it. 
00:35:34.500 [2024-11-18 00:40:58.106693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.500 [2024-11-18 00:40:58.106745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.500 qpair failed and we were unable to recover it. 00:35:34.500 [2024-11-18 00:40:58.106905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.500 [2024-11-18 00:40:58.106951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.500 qpair failed and we were unable to recover it. 00:35:34.500 [2024-11-18 00:40:58.107095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.500 [2024-11-18 00:40:58.107124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.500 qpair failed and we were unable to recover it. 00:35:34.500 [2024-11-18 00:40:58.107269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.500 [2024-11-18 00:40:58.107296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.500 qpair failed and we were unable to recover it. 00:35:34.500 [2024-11-18 00:40:58.107390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.500 [2024-11-18 00:40:58.107417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.500 qpair failed and we were unable to recover it. 
00:35:34.500 [2024-11-18 00:40:58.107547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.500 [2024-11-18 00:40:58.107578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.500 qpair failed and we were unable to recover it. 00:35:34.500 [2024-11-18 00:40:58.107702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.500 [2024-11-18 00:40:58.107733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.500 qpair failed and we were unable to recover it. 00:35:34.500 [2024-11-18 00:40:58.107879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.500 [2024-11-18 00:40:58.107910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.500 qpair failed and we were unable to recover it. 00:35:34.500 [2024-11-18 00:40:58.108045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.501 [2024-11-18 00:40:58.108077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.501 qpair failed and we were unable to recover it. 00:35:34.501 [2024-11-18 00:40:58.108172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.501 [2024-11-18 00:40:58.108202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.501 qpair failed and we were unable to recover it. 
00:35:34.501 [2024-11-18 00:40:58.108359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.501 [2024-11-18 00:40:58.108414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.501 qpair failed and we were unable to recover it. 00:35:34.501 [2024-11-18 00:40:58.108503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.501 [2024-11-18 00:40:58.108530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.501 qpair failed and we were unable to recover it. 00:35:34.501 [2024-11-18 00:40:58.108637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.501 [2024-11-18 00:40:58.108682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.501 qpair failed and we were unable to recover it. 00:35:34.501 [2024-11-18 00:40:58.108826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.501 [2024-11-18 00:40:58.108868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.501 qpair failed and we were unable to recover it. 00:35:34.501 [2024-11-18 00:40:58.109014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.501 [2024-11-18 00:40:58.109059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.501 qpair failed and we were unable to recover it. 
00:35:34.501 [2024-11-18 00:40:58.109169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.501 [2024-11-18 00:40:58.109195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.501 qpair failed and we were unable to recover it. 00:35:34.501 [2024-11-18 00:40:58.109280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.501 [2024-11-18 00:40:58.109306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.501 qpair failed and we were unable to recover it. 00:35:34.501 [2024-11-18 00:40:58.109412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.501 [2024-11-18 00:40:58.109438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.502 qpair failed and we were unable to recover it. 00:35:34.502 [2024-11-18 00:40:58.109557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.502 [2024-11-18 00:40:58.109583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.502 qpair failed and we were unable to recover it. 00:35:34.502 [2024-11-18 00:40:58.109662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.502 [2024-11-18 00:40:58.109689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.502 qpair failed and we were unable to recover it. 
00:35:34.502 [2024-11-18 00:40:58.109809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.502 [2024-11-18 00:40:58.109839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.502 qpair failed and we were unable to recover it. 00:35:34.502 [2024-11-18 00:40:58.109954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.502 [2024-11-18 00:40:58.109981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.502 qpair failed and we were unable to recover it. 00:35:34.502 [2024-11-18 00:40:58.110066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.502 [2024-11-18 00:40:58.110092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.502 qpair failed and we were unable to recover it. 00:35:34.502 [2024-11-18 00:40:58.110204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.502 [2024-11-18 00:40:58.110230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.502 qpair failed and we were unable to recover it. 00:35:34.502 [2024-11-18 00:40:58.110326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.502 [2024-11-18 00:40:58.110370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.502 qpair failed and we were unable to recover it. 
00:35:34.502 [2024-11-18 00:40:58.110521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.502 [2024-11-18 00:40:58.110550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.503 qpair failed and we were unable to recover it. 00:35:34.503 [2024-11-18 00:40:58.110712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.503 [2024-11-18 00:40:58.110740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.503 qpair failed and we were unable to recover it. 00:35:34.503 [2024-11-18 00:40:58.110871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.503 [2024-11-18 00:40:58.110900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.503 qpair failed and we were unable to recover it. 00:35:34.503 [2024-11-18 00:40:58.111023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.503 [2024-11-18 00:40:58.111055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.503 qpair failed and we were unable to recover it. 00:35:34.503 [2024-11-18 00:40:58.111197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.503 [2024-11-18 00:40:58.111223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.503 qpair failed and we were unable to recover it. 
00:35:34.503 [2024-11-18 00:40:58.111343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.503 [2024-11-18 00:40:58.111370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.503 qpair failed and we were unable to recover it. 00:35:34.503 [2024-11-18 00:40:58.111454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.503 [2024-11-18 00:40:58.111481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.503 qpair failed and we were unable to recover it. 00:35:34.503 [2024-11-18 00:40:58.111624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.503 [2024-11-18 00:40:58.111654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.503 qpair failed and we were unable to recover it. 00:35:34.503 [2024-11-18 00:40:58.111790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.503 [2024-11-18 00:40:58.111820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.503 qpair failed and we were unable to recover it. 00:35:34.503 [2024-11-18 00:40:58.111927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.503 [2024-11-18 00:40:58.111953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.503 qpair failed and we were unable to recover it. 
00:35:34.504 [2024-11-18 00:40:58.112061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.504 [2024-11-18 00:40:58.112092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.504 qpair failed and we were unable to recover it. 00:35:34.504 [2024-11-18 00:40:58.112223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.504 [2024-11-18 00:40:58.112253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.504 qpair failed and we were unable to recover it. 00:35:34.504 [2024-11-18 00:40:58.112419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.504 [2024-11-18 00:40:58.112446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.504 qpair failed and we were unable to recover it. 00:35:34.504 [2024-11-18 00:40:58.112589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.504 [2024-11-18 00:40:58.112620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.504 qpair failed and we were unable to recover it. 00:35:34.504 [2024-11-18 00:40:58.112727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.504 [2024-11-18 00:40:58.112756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.504 qpair failed and we were unable to recover it. 
00:35:34.504 [2024-11-18 00:40:58.112885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.504 [2024-11-18 00:40:58.112935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.504 qpair failed and we were unable to recover it. 00:35:34.504 [2024-11-18 00:40:58.113048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.504 [2024-11-18 00:40:58.113078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.504 qpair failed and we were unable to recover it. 00:35:34.504 [2024-11-18 00:40:58.113167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.504 [2024-11-18 00:40:58.113196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.504 qpair failed and we were unable to recover it. 00:35:34.504 [2024-11-18 00:40:58.113287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.504 [2024-11-18 00:40:58.113323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.504 qpair failed and we were unable to recover it. 00:35:34.504 [2024-11-18 00:40:58.113430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.504 [2024-11-18 00:40:58.113456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.504 qpair failed and we were unable to recover it. 
00:35:34.504 [2024-11-18 00:40:58.113552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.505 [2024-11-18 00:40:58.113581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.505 qpair failed and we were unable to recover it. 00:35:34.505 [2024-11-18 00:40:58.113700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.505 [2024-11-18 00:40:58.113731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.505 qpair failed and we were unable to recover it. 00:35:34.505 [2024-11-18 00:40:58.113860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.505 [2024-11-18 00:40:58.113889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.505 qpair failed and we were unable to recover it. 00:35:34.505 [2024-11-18 00:40:58.114031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.505 [2024-11-18 00:40:58.114061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.505 qpair failed and we were unable to recover it. 00:35:34.505 [2024-11-18 00:40:58.114181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.505 [2024-11-18 00:40:58.114210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.505 qpair failed and we were unable to recover it. 
00:35:34.505 [2024-11-18 00:40:58.114345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.505 [2024-11-18 00:40:58.114372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.505 qpair failed and we were unable to recover it. 00:35:34.505 [2024-11-18 00:40:58.114465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.505 [2024-11-18 00:40:58.114491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.505 qpair failed and we were unable to recover it. 00:35:34.505 [2024-11-18 00:40:58.114607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.505 [2024-11-18 00:40:58.114634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.505 qpair failed and we were unable to recover it. 00:35:34.505 [2024-11-18 00:40:58.114799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.505 [2024-11-18 00:40:58.114847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.505 qpair failed and we were unable to recover it. 00:35:34.505 [2024-11-18 00:40:58.114989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.505 [2024-11-18 00:40:58.115035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.505 qpair failed and we were unable to recover it. 
00:35:34.505 [2024-11-18 00:40:58.115123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.505 [2024-11-18 00:40:58.115151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.505 qpair failed and we were unable to recover it.
00:35:34.505 [2024-11-18 00:40:58.115241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.506 [2024-11-18 00:40:58.115267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.506 qpair failed and we were unable to recover it.
00:35:34.506 [2024-11-18 00:40:58.115429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.506 [2024-11-18 00:40:58.115473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.506 qpair failed and we were unable to recover it.
00:35:34.506 [2024-11-18 00:40:58.115557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.506 [2024-11-18 00:40:58.115584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.506 qpair failed and we were unable to recover it.
00:35:34.506 [2024-11-18 00:40:58.115718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.506 [2024-11-18 00:40:58.115763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.506 qpair failed and we were unable to recover it.
00:35:34.506 [2024-11-18 00:40:58.115871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.506 [2024-11-18 00:40:58.115916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.506 qpair failed and we were unable to recover it.
00:35:34.506 [2024-11-18 00:40:58.116029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.506 [2024-11-18 00:40:58.116056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.506 qpair failed and we were unable to recover it.
00:35:34.506 [2024-11-18 00:40:58.116167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.506 [2024-11-18 00:40:58.116194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.506 qpair failed and we were unable to recover it.
00:35:34.506 [2024-11-18 00:40:58.116274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.507 [2024-11-18 00:40:58.116300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.507 qpair failed and we were unable to recover it.
00:35:34.507 [2024-11-18 00:40:58.116444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.507 [2024-11-18 00:40:58.116471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.507 qpair failed and we were unable to recover it.
00:35:34.507 [2024-11-18 00:40:58.116627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.507 [2024-11-18 00:40:58.116673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.507 qpair failed and we were unable to recover it.
00:35:34.507 [2024-11-18 00:40:58.116755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.507 [2024-11-18 00:40:58.116781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.507 qpair failed and we were unable to recover it.
00:35:34.507 [2024-11-18 00:40:58.116863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.507 [2024-11-18 00:40:58.116896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.507 qpair failed and we were unable to recover it.
00:35:34.507 [2024-11-18 00:40:58.117013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.507 [2024-11-18 00:40:58.117041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.507 qpair failed and we were unable to recover it.
00:35:34.507 [2024-11-18 00:40:58.117160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.507 [2024-11-18 00:40:58.117187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.507 qpair failed and we were unable to recover it.
00:35:34.507 [2024-11-18 00:40:58.117295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.507 [2024-11-18 00:40:58.117329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.507 qpair failed and we were unable to recover it.
00:35:34.507 [2024-11-18 00:40:58.117438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.508 [2024-11-18 00:40:58.117467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.508 qpair failed and we were unable to recover it.
00:35:34.508 [2024-11-18 00:40:58.117604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.508 [2024-11-18 00:40:58.117650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.508 qpair failed and we were unable to recover it.
00:35:34.508 [2024-11-18 00:40:58.117784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.508 [2024-11-18 00:40:58.117829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.508 qpair failed and we were unable to recover it.
00:35:34.508 [2024-11-18 00:40:58.117983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.508 [2024-11-18 00:40:58.118012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.508 qpair failed and we were unable to recover it.
00:35:34.508 [2024-11-18 00:40:58.118101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.508 [2024-11-18 00:40:58.118130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.508 qpair failed and we were unable to recover it.
00:35:34.508 [2024-11-18 00:40:58.118241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.508 [2024-11-18 00:40:58.118269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.508 qpair failed and we were unable to recover it.
00:35:34.508 [2024-11-18 00:40:58.118432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.508 [2024-11-18 00:40:58.118477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.508 qpair failed and we were unable to recover it.
00:35:34.508 [2024-11-18 00:40:58.118607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.508 [2024-11-18 00:40:58.118636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.508 qpair failed and we were unable to recover it.
00:35:34.508 [2024-11-18 00:40:58.118819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.508 [2024-11-18 00:40:58.118864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.508 qpair failed and we were unable to recover it.
00:35:34.508 [2024-11-18 00:40:58.119027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.508 [2024-11-18 00:40:58.119069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.508 qpair failed and we were unable to recover it.
00:35:34.508 [2024-11-18 00:40:58.119190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.508 [2024-11-18 00:40:58.119217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.508 qpair failed and we were unable to recover it.
00:35:34.509 [2024-11-18 00:40:58.119332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.509 [2024-11-18 00:40:58.119358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.509 qpair failed and we were unable to recover it.
00:35:34.509 [2024-11-18 00:40:58.119485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.509 [2024-11-18 00:40:58.119529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.509 qpair failed and we were unable to recover it.
00:35:34.509 [2024-11-18 00:40:58.119664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.509 [2024-11-18 00:40:58.119708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.509 qpair failed and we were unable to recover it.
00:35:34.509 [2024-11-18 00:40:58.119869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.509 [2024-11-18 00:40:58.119915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.509 qpair failed and we were unable to recover it.
00:35:34.509 [2024-11-18 00:40:58.120023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.509 [2024-11-18 00:40:58.120049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.509 qpair failed and we were unable to recover it.
00:35:34.509 [2024-11-18 00:40:58.120169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.509 [2024-11-18 00:40:58.120195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.509 qpair failed and we were unable to recover it.
00:35:34.509 [2024-11-18 00:40:58.120325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.120370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.120463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.120491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.120631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.120676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.120815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.120860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.120995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.121025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.121184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.121212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.121302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.121349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.121523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.121566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.121733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.121778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.121903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.121947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.122095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.122123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.122220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.122247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.122382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.122407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.122516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.122541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.122652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.122695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.122848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.122876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.122994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.123022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.123128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.123152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.123256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.123284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.123434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.123459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.123585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.123620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.123777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.123803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.123952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.123984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.124138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.124182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.124338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.124382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.124497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.124523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.124649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.124692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.124777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.124804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.124924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.124950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.125069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.125096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.125184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.125212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.125336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.125375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.125497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.125536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.125651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.125685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.510 qpair failed and we were unable to recover it.
00:35:34.510 [2024-11-18 00:40:58.125816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.510 [2024-11-18 00:40:58.125859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.126000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.126028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.126132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.126158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.126270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.126298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.126420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.126447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.126573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.126615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.126737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.126771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.126912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.126939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.127081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.127108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.127218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.127244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.127359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.127386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.127504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.127530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.127630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.127658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.127750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.127778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.127893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.127920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.128011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.128038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.128169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.128195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.128320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.128347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.128470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.128497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.128606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.128648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.128797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.128824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.128953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.128981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.129105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.129132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.129257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.129283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.129396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.129422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.129530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.129555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.129633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.129679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.129798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.511 [2024-11-18 00:40:58.129840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.511 qpair failed and we were unable to recover it.
00:35:34.511 [2024-11-18 00:40:58.129992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.130019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.130108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.130141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.130264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.130290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.130388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.130414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.130497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.130524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 
00:35:34.511 [2024-11-18 00:40:58.130659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.130686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.130807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.130834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.130967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.130994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.131106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.131133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.131218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.131245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 
00:35:34.511 [2024-11-18 00:40:58.131343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.131386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.131482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.131520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.131656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.131701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.131836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.131864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.131962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.131989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 
00:35:34.511 [2024-11-18 00:40:58.132102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.132128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.132207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.132234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.132382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.132409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.132490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.132517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.132611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.132637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 
00:35:34.511 [2024-11-18 00:40:58.132743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.132769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.132845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.132871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.132981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.133010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.133112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.133136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.133254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.133280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 
00:35:34.511 [2024-11-18 00:40:58.133400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.133431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.133547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.133573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.133675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.133703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.133826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.133856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.134039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.134090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 
00:35:34.511 [2024-11-18 00:40:58.134224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.134250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.134379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.134406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.134566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.134612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.134751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.134795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.134957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.135014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 
00:35:34.511 [2024-11-18 00:40:58.135134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.135162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.135248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.135276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.135444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.135470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.135556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.135582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.135742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.135768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 
00:35:34.511 [2024-11-18 00:40:58.135944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.135994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.136145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.136172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.136277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.136304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.136412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.136455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 00:35:34.511 [2024-11-18 00:40:58.136566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.511 [2024-11-18 00:40:58.136613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.511 qpair failed and we were unable to recover it. 
00:35:34.511 [2024-11-18 00:40:58.136746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.136792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.136929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.136956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.137073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.137100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.137187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.137213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.137345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.137374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 
00:35:34.512 [2024-11-18 00:40:58.137467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.137494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.137591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.137618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.137754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.137785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.137905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.137932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.138051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.138076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 
00:35:34.512 [2024-11-18 00:40:58.138187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.138213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.138299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.138334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.138447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.138474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.138592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.138618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.138737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.138764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 
00:35:34.512 [2024-11-18 00:40:58.138852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.138878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.138962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.138988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.139071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.139098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.139242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.139268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.139416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.139443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 
00:35:34.512 [2024-11-18 00:40:58.139559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.139586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.139705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.139732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.139836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.139862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.140014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.140040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.140155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.140181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 
00:35:34.512 [2024-11-18 00:40:58.140291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.140326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.140440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.140466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.140552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.140578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.140729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.140757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.140872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.140899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 
00:35:34.512 [2024-11-18 00:40:58.140976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.141002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.141146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.141172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.141281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.141308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.141425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.141451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.141559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.141588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 
00:35:34.512 [2024-11-18 00:40:58.141738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.141765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.141867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.141894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.141975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.142000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.142093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.142120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 00:35:34.512 [2024-11-18 00:40:58.142226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.142252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 
00:35:34.512 [2024-11-18 00:40:58.142398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.512 [2024-11-18 00:40:58.142424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.512 qpair failed and we were unable to recover it. 
00:35:34.514 [... the same three-line sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it." — repeats continuously from 00:40:58.142398 through 00:40:58.159481 for tqpair values 0x7eff50000b90, 0x7eff48000b90, and 0x18bcb40, all targeting addr=10.0.0.2, port=4420 ...] 
00:35:34.514 [2024-11-18 00:40:58.159567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.159609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.159727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.159755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.159851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.159879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.160027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.160055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.160188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.160229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 
00:35:34.514 [2024-11-18 00:40:58.160372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.160400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.160534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.160562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.160702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.160745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.160866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.160908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.161039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.161082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 
00:35:34.514 [2024-11-18 00:40:58.161193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.161219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.161337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.161362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.161487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.161513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.161625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.161659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.161795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.161821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 
00:35:34.514 [2024-11-18 00:40:58.161940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.161967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.162058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.162086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.162235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.162265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.162416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.162444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.162556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.162582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 
00:35:34.514 [2024-11-18 00:40:58.162732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.162758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.162843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.162886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.163037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.163065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.163209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.163237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.163382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.163409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 
00:35:34.514 [2024-11-18 00:40:58.163529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.163557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.163683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.163711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.163866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.163895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.164074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.164119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.164246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.164285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 
00:35:34.514 [2024-11-18 00:40:58.164389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.164418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.164536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.164563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.164671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.164699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.164831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.164860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.164953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.164982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 
00:35:34.514 [2024-11-18 00:40:58.165093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.165120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.514 qpair failed and we were unable to recover it. 00:35:34.514 [2024-11-18 00:40:58.165210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.514 [2024-11-18 00:40:58.165236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.165376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.165404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.165504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.165530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.165623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.165649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 
00:35:34.515 [2024-11-18 00:40:58.165769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.165797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.165879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.165906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.166050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.166077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.166218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.166244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.166320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.166348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 
00:35:34.515 [2024-11-18 00:40:58.166465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.166492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.166618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.166646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.166766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.166794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.166910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.166938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.167072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.167100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 
00:35:34.515 [2024-11-18 00:40:58.167186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.167214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.167333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.167360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.167503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.167529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.167654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.167683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.167827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.167869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 
00:35:34.515 [2024-11-18 00:40:58.167990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.168018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.168157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.168187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.168341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.168383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.168494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.168520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.168656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.168685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 
00:35:34.515 [2024-11-18 00:40:58.168837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.168866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.168949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.168978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.169092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.169120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.169210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.169237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.169356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.169386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 
00:35:34.515 [2024-11-18 00:40:58.169532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.169575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.169657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.169683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.169802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.169829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.169942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.169968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.170086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.170113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 
00:35:34.515 [2024-11-18 00:40:58.170233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.170258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.170350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.170397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.170528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.170557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.170705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.170734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.170854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.170883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 
00:35:34.515 [2024-11-18 00:40:58.171039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.171069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.171190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.171219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.171324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.171369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.171489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.171518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.171616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.171646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 
00:35:34.515 [2024-11-18 00:40:58.171738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.171772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.171893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.171922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.172055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.172084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.172217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.172245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.172374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.172401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 
00:35:34.515 [2024-11-18 00:40:58.172534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.172577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.172708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.172752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.172834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.172860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.172991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.173036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.173161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.173188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 
00:35:34.515 [2024-11-18 00:40:58.173302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.173335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.173467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.173510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.173672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.173716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.173801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.173828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.173948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.173974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 
00:35:34.515 [2024-11-18 00:40:58.174061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.174089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.174232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.174259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.515 [2024-11-18 00:40:58.174361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.515 [2024-11-18 00:40:58.174400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.515 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.174554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.174585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.174742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.174772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 
00:35:34.516 [2024-11-18 00:40:58.174903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.174945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.175077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.175104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.175199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.175226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.175341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.175368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.175500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.175530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 
00:35:34.516 [2024-11-18 00:40:58.175623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.175653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.175741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.175786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.175997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.176042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.176157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.176183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.176269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.176295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 
00:35:34.516 [2024-11-18 00:40:58.176435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.176479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.176609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.176652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.176793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.176820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.176927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.176953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.177100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.177126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 
00:35:34.516 [2024-11-18 00:40:58.177206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.177232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.177323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.177350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.177494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.177523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.177647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.177676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.177772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.177799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 
00:35:34.516 [2024-11-18 00:40:58.177887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.177919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.178061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.178088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.178203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.178229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.178317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.178344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.178449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.178475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 
00:35:34.516 [2024-11-18 00:40:58.178617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.178646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.178733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.178760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.178891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.178917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.179005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.179031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.179116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.179142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 
00:35:34.516 [2024-11-18 00:40:58.179255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.179281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.179417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.179447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.179542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.179572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.179694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.179723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.179881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.179911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 
00:35:34.516 [2024-11-18 00:40:58.180062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.180105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.180200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.180239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.180358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.180404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.180534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.180564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.180689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.180718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 
00:35:34.516 [2024-11-18 00:40:58.180870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.180901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.181032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.181061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.181229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.181257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.181374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.181400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.181533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.181577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 
00:35:34.516 [2024-11-18 00:40:58.181705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.181734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.181879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.181923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.182064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.182095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.182173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.182200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.182328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.182357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 
00:35:34.516 [2024-11-18 00:40:58.182482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.182509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.182616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.182645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.182796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.182825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.182950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.182980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.183150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.183196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 
00:35:34.516 [2024-11-18 00:40:58.183326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.183366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.183515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.183543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.183646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.183675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.183776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.183818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 00:35:34.516 [2024-11-18 00:40:58.183946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.183989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.516 qpair failed and we were unable to recover it. 
00:35:34.516 [2024-11-18 00:40:58.184140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.516 [2024-11-18 00:40:58.184167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.517 qpair failed and we were unable to recover it. 00:35:34.517 [2024-11-18 00:40:58.184332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.517 [2024-11-18 00:40:58.184359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.517 qpair failed and we were unable to recover it. 00:35:34.517 [2024-11-18 00:40:58.184477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.517 [2024-11-18 00:40:58.184504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.517 qpair failed and we were unable to recover it. 00:35:34.517 [2024-11-18 00:40:58.184623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.517 [2024-11-18 00:40:58.184649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.517 qpair failed and we were unable to recover it. 00:35:34.517 [2024-11-18 00:40:58.184779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.517 [2024-11-18 00:40:58.184808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.517 qpair failed and we were unable to recover it. 
00:35:34.517 [2024-11-18 00:40:58.184901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.517 [2024-11-18 00:40:58.184931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.517 qpair failed and we were unable to recover it. 00:35:34.517 [2024-11-18 00:40:58.185052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.517 [2024-11-18 00:40:58.185082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.517 qpair failed and we were unable to recover it. 00:35:34.517 [2024-11-18 00:40:58.185216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.517 [2024-11-18 00:40:58.185242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.517 qpair failed and we were unable to recover it. 00:35:34.517 [2024-11-18 00:40:58.185357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.517 [2024-11-18 00:40:58.185384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.517 qpair failed and we were unable to recover it. 00:35:34.517 [2024-11-18 00:40:58.185500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.517 [2024-11-18 00:40:58.185526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.517 qpair failed and we were unable to recover it. 
00:35:34.517 [2024-11-18 00:40:58.185640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.517 [2024-11-18 00:40:58.185666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.517 qpair failed and we were unable to recover it. 00:35:34.517 [2024-11-18 00:40:58.185796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.517 [2024-11-18 00:40:58.185826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.517 qpair failed and we were unable to recover it. 00:35:34.517 [2024-11-18 00:40:58.185958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.517 [2024-11-18 00:40:58.185986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.517 qpair failed and we were unable to recover it. 00:35:34.517 [2024-11-18 00:40:58.186156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.517 [2024-11-18 00:40:58.186212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.517 qpair failed and we were unable to recover it. 00:35:34.517 [2024-11-18 00:40:58.186342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.517 [2024-11-18 00:40:58.186371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.517 qpair failed and we were unable to recover it. 
00:35:34.519 [2024-11-18 00:40:58.203346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.203373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.203477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.203509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.203637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.203667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.203822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.203851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.203970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.203999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 
00:35:34.519 [2024-11-18 00:40:58.204145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.204176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.204325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.204370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.204475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.204506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.204661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.204706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.204845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.204892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 
00:35:34.519 [2024-11-18 00:40:58.204997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.205026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.205150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.205179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.205318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.205363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.205505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.205533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.205621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.205647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 
00:35:34.519 [2024-11-18 00:40:58.205774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.205803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.205956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.205982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.206093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.206120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.206239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.206265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.206357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.206385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 
00:35:34.519 [2024-11-18 00:40:58.206494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.206520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.206604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.206630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.206734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.206760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.206875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.206918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.207043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.207072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 
00:35:34.519 [2024-11-18 00:40:58.207199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.207225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.207351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.207391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.207514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.207543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.207693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.207735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.207887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.207916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 
00:35:34.519 [2024-11-18 00:40:58.208067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.208097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.519 [2024-11-18 00:40:58.208202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.519 [2024-11-18 00:40:58.208228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.519 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.208346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.208373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.208518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.208544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.208729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.208774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 
00:35:34.520 [2024-11-18 00:40:58.208911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.208941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.209052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.209081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.209238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.209266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.209410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.209436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.209523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.209550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 
00:35:34.520 [2024-11-18 00:40:58.209699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.209729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.209914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.209960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.210115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.210144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.210304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.210337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.210449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.210475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 
00:35:34.520 [2024-11-18 00:40:58.210606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.210652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.210763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.210804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.210931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.210960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.211088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.211116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.211253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.211279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 
00:35:34.520 [2024-11-18 00:40:58.211393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.211424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.211537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.211564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.211645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.211688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.211787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.211817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.211970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.212000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 
00:35:34.520 [2024-11-18 00:40:58.212112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.212141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.212304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.212357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.212449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.212475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.212598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.212636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.212775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.212820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 
00:35:34.520 [2024-11-18 00:40:58.212915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.212943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.213070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.213114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.213281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.213337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.213493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.213521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.213621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.213648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 
00:35:34.520 [2024-11-18 00:40:58.213779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.213809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.213950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.213982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.214142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.214188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.214323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.214350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.214467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.214493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 
00:35:34.520 [2024-11-18 00:40:58.214624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.214654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.214809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.214854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.215024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.215072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.215222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.215251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 00:35:34.520 [2024-11-18 00:40:58.215343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.520 [2024-11-18 00:40:58.215369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.520 qpair failed and we were unable to recover it. 
00:35:34.520 [2024-11-18 00:40:58.215463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.520 [2024-11-18 00:40:58.215489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.520 qpair failed and we were unable to recover it.
00:35:34.520 [2024-11-18 00:40:58.215577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.520 [2024-11-18 00:40:58.215603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.520 qpair failed and we were unable to recover it.
00:35:34.520 [2024-11-18 00:40:58.215777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.520 [2024-11-18 00:40:58.215834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.520 qpair failed and we were unable to recover it.
00:35:34.520 [2024-11-18 00:40:58.215970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.520 [2024-11-18 00:40:58.216014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.520 qpair failed and we were unable to recover it.
00:35:34.520 [2024-11-18 00:40:58.216157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.520 [2024-11-18 00:40:58.216186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.520 qpair failed and we were unable to recover it.
00:35:34.520 [2024-11-18 00:40:58.216296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.520 [2024-11-18 00:40:58.216329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.520 qpair failed and we were unable to recover it.
00:35:34.520 [2024-11-18 00:40:58.216421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.520 [2024-11-18 00:40:58.216467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.216589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.216620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.216785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.216831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.216980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.217025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.217133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.217161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.217305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.217339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.217466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.217495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.217681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.217726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.217887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.217931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.218043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.218069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.218223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.218250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.218431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.218474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.218620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.218653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.218779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.218811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.218998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.219028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.219136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.219167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.219304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.219339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.219491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.219517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.219665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.219692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.219788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.219818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.219949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.219994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.220123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.220153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.220264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.220292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.220408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.220447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.220595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.220641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.220782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.220828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.220965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.221011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.221106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.221136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.221289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.221327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.221466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.221492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.221635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.221679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.221768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.221797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.221906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.221932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.521 [2024-11-18 00:40:58.222076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.521 [2024-11-18 00:40:58.222105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.521 qpair failed and we were unable to recover it.
00:35:34.522 [2024-11-18 00:40:58.222259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.522 [2024-11-18 00:40:58.222298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.522 qpair failed and we were unable to recover it.
00:35:34.522 [2024-11-18 00:40:58.222402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.522 [2024-11-18 00:40:58.222431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.522 qpair failed and we were unable to recover it.
00:35:34.522 [2024-11-18 00:40:58.222575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.522 [2024-11-18 00:40:58.222609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.522 qpair failed and we were unable to recover it.
00:35:34.522 [2024-11-18 00:40:58.222721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.522 [2024-11-18 00:40:58.222747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.522 qpair failed and we were unable to recover it.
00:35:34.522 [2024-11-18 00:40:58.222879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.522 [2024-11-18 00:40:58.222910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.522 qpair failed and we were unable to recover it.
00:35:34.522 [2024-11-18 00:40:58.223051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.522 [2024-11-18 00:40:58.223096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.522 qpair failed and we were unable to recover it.
00:35:34.522 [2024-11-18 00:40:58.223248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.522 [2024-11-18 00:40:58.223292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.522 qpair failed and we were unable to recover it.
00:35:34.522 [2024-11-18 00:40:58.223425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.522 [2024-11-18 00:40:58.223463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.522 qpair failed and we were unable to recover it.
00:35:34.522 [2024-11-18 00:40:58.223634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.522 [2024-11-18 00:40:58.223682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.522 qpair failed and we were unable to recover it.
00:35:34.522 [2024-11-18 00:40:58.223812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.522 [2024-11-18 00:40:58.223862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.522 qpair failed and we were unable to recover it.
00:35:34.522 [2024-11-18 00:40:58.223941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.522 [2024-11-18 00:40:58.223968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.522 qpair failed and we were unable to recover it.
00:35:34.522 [2024-11-18 00:40:58.224080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.522 [2024-11-18 00:40:58.224107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.522 qpair failed and we were unable to recover it.
00:35:34.522 [2024-11-18 00:40:58.224220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.522 [2024-11-18 00:40:58.224246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.522 qpair failed and we were unable to recover it.
00:35:34.522 [2024-11-18 00:40:58.224336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.522 [2024-11-18 00:40:58.224363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.522 qpair failed and we were unable to recover it.
00:35:34.522 [2024-11-18 00:40:58.224479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.522 [2024-11-18 00:40:58.224505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.522 qpair failed and we were unable to recover it.
00:35:34.522 [2024-11-18 00:40:58.224582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.522 [2024-11-18 00:40:58.224608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.522 qpair failed and we were unable to recover it.
00:35:34.522 [2024-11-18 00:40:58.224718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.522 [2024-11-18 00:40:58.224744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.224886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.224913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.224994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.225020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.225139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.225170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.225257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.225284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.225432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.225464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.225563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.225612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.225747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.225779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.225882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.225926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.226061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.226093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.226222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.226254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.226440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.226480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.226572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.226600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.226756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.226799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.226931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.226975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.227062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.227088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.227240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.227266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.227371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.227416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.227546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.227590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.227723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.227767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.227889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.227934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.228046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.228073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.228167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.523 [2024-11-18 00:40:58.228194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.523 qpair failed and we were unable to recover it.
00:35:34.523 [2024-11-18 00:40:58.228304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.228354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.228478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.228508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.228637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.228682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.228852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.228883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.229060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.229092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.229229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.229260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.229375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.229402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.229486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.229528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.229678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.229707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.229850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.229881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.230040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.230071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.230195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.230227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.230384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.230411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.230521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.230547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.230677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.230706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.230850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.230882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.231014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.231046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.231199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.231226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.231344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.231371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.231457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.231500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.231647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.231693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.231826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.231857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.231972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.232019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.232162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.232193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.232357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.232384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.232472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.232498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.232641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.232668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.232750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.232779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.232917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.232961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.233100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.233145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.233282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.233321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.233470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.233496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.233610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.233640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.233817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.233859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.234001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.234049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.234184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.524 [2024-11-18 00:40:58.234212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.524 qpair failed and we were unable to recover it.
00:35:34.524 [2024-11-18 00:40:58.234337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.524 [2024-11-18 00:40:58.234365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.524 qpair failed and we were unable to recover it. 00:35:34.524 [2024-11-18 00:40:58.234448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.524 [2024-11-18 00:40:58.234474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.524 qpair failed and we were unable to recover it. 00:35:34.524 [2024-11-18 00:40:58.234620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.524 [2024-11-18 00:40:58.234647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.524 qpair failed and we were unable to recover it. 00:35:34.524 [2024-11-18 00:40:58.234759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.524 [2024-11-18 00:40:58.234785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.524 qpair failed and we were unable to recover it. 00:35:34.524 [2024-11-18 00:40:58.234895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.524 [2024-11-18 00:40:58.234928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.524 qpair failed and we were unable to recover it. 
00:35:34.524 [2024-11-18 00:40:58.235038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.524 [2024-11-18 00:40:58.235070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.524 qpair failed and we were unable to recover it. 00:35:34.525 [2024-11-18 00:40:58.235237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.235265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 00:35:34.525 [2024-11-18 00:40:58.235413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.235442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 00:35:34.525 [2024-11-18 00:40:58.235578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.235623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 00:35:34.525 [2024-11-18 00:40:58.235787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.235831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 
00:35:34.525 [2024-11-18 00:40:58.235961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.236007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 00:35:34.525 [2024-11-18 00:40:58.236150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.236176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 00:35:34.525 [2024-11-18 00:40:58.236274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.236303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 00:35:34.525 [2024-11-18 00:40:58.236398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.236425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 00:35:34.525 [2024-11-18 00:40:58.236543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.236570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 
00:35:34.525 [2024-11-18 00:40:58.236679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.236725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 00:35:34.525 [2024-11-18 00:40:58.236885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.236932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 00:35:34.525 [2024-11-18 00:40:58.237117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.237164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 00:35:34.525 [2024-11-18 00:40:58.237293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.237326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 00:35:34.525 [2024-11-18 00:40:58.237446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.237474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 
00:35:34.525 [2024-11-18 00:40:58.237592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.237618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 00:35:34.525 [2024-11-18 00:40:58.237762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.237788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 00:35:34.525 [2024-11-18 00:40:58.237922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.237953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 00:35:34.525 [2024-11-18 00:40:58.238104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.238133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 00:35:34.525 [2024-11-18 00:40:58.238293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.238335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 
00:35:34.525 [2024-11-18 00:40:58.238492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.238518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 00:35:34.525 [2024-11-18 00:40:58.238632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.525 [2024-11-18 00:40:58.238659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.525 qpair failed and we were unable to recover it. 00:35:34.811 [2024-11-18 00:40:58.238761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.811 [2024-11-18 00:40:58.238793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.811 qpair failed and we were unable to recover it. 00:35:34.811 [2024-11-18 00:40:58.238917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.811 [2024-11-18 00:40:58.238948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.811 qpair failed and we were unable to recover it. 00:35:34.811 [2024-11-18 00:40:58.239045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.811 [2024-11-18 00:40:58.239077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.811 qpair failed and we were unable to recover it. 
00:35:34.811 [2024-11-18 00:40:58.239209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.811 [2024-11-18 00:40:58.239240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.811 qpair failed and we were unable to recover it. 00:35:34.811 [2024-11-18 00:40:58.239379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.811 [2024-11-18 00:40:58.239406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.811 qpair failed and we were unable to recover it. 00:35:34.811 [2024-11-18 00:40:58.239491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.811 [2024-11-18 00:40:58.239534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.811 qpair failed and we were unable to recover it. 00:35:34.811 [2024-11-18 00:40:58.239618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.811 [2024-11-18 00:40:58.239647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.811 qpair failed and we were unable to recover it. 00:35:34.811 [2024-11-18 00:40:58.239763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.811 [2024-11-18 00:40:58.239797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.811 qpair failed and we were unable to recover it. 
00:35:34.811 [2024-11-18 00:40:58.239928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.811 [2024-11-18 00:40:58.239961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.811 qpair failed and we were unable to recover it. 00:35:34.811 [2024-11-18 00:40:58.240093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.811 [2024-11-18 00:40:58.240125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.811 qpair failed and we were unable to recover it. 00:35:34.811 [2024-11-18 00:40:58.240260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.811 [2024-11-18 00:40:58.240287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.811 qpair failed and we were unable to recover it. 00:35:34.811 [2024-11-18 00:40:58.240447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.240474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.240565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.240590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 
00:35:34.812 [2024-11-18 00:40:58.240702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.240731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.240851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.240882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.241077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.241108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.241214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.241240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.241329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.241357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 
00:35:34.812 [2024-11-18 00:40:58.241469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.241495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.241586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.241631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.241778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.241810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.241945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.241976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.242081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.242113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 
00:35:34.812 [2024-11-18 00:40:58.242242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.242274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.242414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.242454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.242629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.242677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.242817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.242846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.242995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.243039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 
00:35:34.812 [2024-11-18 00:40:58.243154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.243180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.243299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.243336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.243454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.243481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.243604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.243630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.243770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.243796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 
00:35:34.812 [2024-11-18 00:40:58.243907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.243933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.244016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.244042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.244138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.244164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.244279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.244305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.244424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.244450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 
00:35:34.812 [2024-11-18 00:40:58.244588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.244615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.244722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.244753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.244909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.244940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.245099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.245130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.245264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.245290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 
00:35:34.812 [2024-11-18 00:40:58.245419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.245458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.245585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.245613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.245732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.812 [2024-11-18 00:40:58.245758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.812 qpair failed and we were unable to recover it. 00:35:34.812 [2024-11-18 00:40:58.245910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.813 [2024-11-18 00:40:58.245958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.813 qpair failed and we were unable to recover it. 00:35:34.813 [2024-11-18 00:40:58.246049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.813 [2024-11-18 00:40:58.246084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.813 qpair failed and we were unable to recover it. 
00:35:34.813 [2024-11-18 00:40:58.246198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.246226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.246343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.246369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.246481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.246507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.246591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.246618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.246709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.246734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.246881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.246911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.247007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.247033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.247170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.247198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.247321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.247348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.247474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.247502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.247651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.247680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.247796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.247825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.247911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.247940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.248086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.248129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.248237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.248280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.248396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.248424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.248510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.248555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.248664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.248696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.248823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.248854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.248952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.248983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.249115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.249147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.249268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.249307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.249407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.249436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.249571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.249603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.249733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.249760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.249934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.249982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.250106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.250140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.250242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.250270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.250421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.250448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.250538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.250565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.250650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.250677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.250781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.250824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.250968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.250995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.251105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.251131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.251244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.251269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.251367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.813 [2024-11-18 00:40:58.251394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.813 qpair failed and we were unable to recover it.
00:35:34.813 [2024-11-18 00:40:58.251511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.251537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.251667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.251697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.251820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.251854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.251956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.251982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.252164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.252196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.252306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.252364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.252478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.252504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.252640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.252685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.252819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.252851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.252998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.253029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.253155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.253187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.253301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.253352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.253494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.253521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.253630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.253661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.253806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.253854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.253961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.254009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.254092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.254118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.254266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.254292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.254441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.254467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.254553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.254579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.254695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.254721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.254846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.254903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.255081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.255114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.255209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.255251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.255382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.255411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.255534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.255563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.255688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.255736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.255918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.255951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.256108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.256134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.256258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.256285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.256387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.256423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.256574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.256621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.256755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.256797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.256898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.256929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.257102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.257134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.257265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.257297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.257496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.257525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.814 [2024-11-18 00:40:58.257643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.814 [2024-11-18 00:40:58.257672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.814 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.257797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.257844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.258003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.258050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.258161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.258187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.258305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.258339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.258421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.258446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.258533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.258559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.258659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.258686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.258799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.258825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.258939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.258965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.259077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.259104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.259215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.259243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.259383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.259410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.259497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.259524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.259641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.259668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.259751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.259794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.259958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.259990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.260156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.260183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.260320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.260380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.260517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.260548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.260668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.260698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.260851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.260898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.261005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.261038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.261172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.261218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.261329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.261357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.261514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.261543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.261698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.261727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.261867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.261898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.262058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.262089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.262236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.262265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.262380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.262408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.262504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.262530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.262673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.262699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.262787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.262819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.262946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.262989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.263069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.263096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.263172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.263198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.263322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.263348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.815 qpair failed and we were unable to recover it.
00:35:34.815 [2024-11-18 00:40:58.263441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.815 [2024-11-18 00:40:58.263480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.816 qpair failed and we were unable to recover it.
00:35:34.816 [2024-11-18 00:40:58.263568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.816 [2024-11-18 00:40:58.263597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.816 qpair failed and we were unable to recover it.
00:35:34.816 [2024-11-18 00:40:58.263714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.816 [2024-11-18 00:40:58.263741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.816 qpair failed and we were unable to recover it.
00:35:34.816 [2024-11-18 00:40:58.263819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.816 [2024-11-18 00:40:58.263846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.816 qpair failed and we were unable to recover it.
00:35:34.816 [2024-11-18 00:40:58.263959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.263986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.264130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.264157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.264237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.264264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.264382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.264409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.264550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.264577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 
00:35:34.816 [2024-11-18 00:40:58.264708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.264737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.264826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.264855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.264976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.265006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.265174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.265219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.265332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.265359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 
00:35:34.816 [2024-11-18 00:40:58.265453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.265481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.265589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.265633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.265763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.265807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.265920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.265946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.266060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.266087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 
00:35:34.816 [2024-11-18 00:40:58.266207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.266235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.266327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.266355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.266442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.266469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.266581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.266612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.266727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.266754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 
00:35:34.816 [2024-11-18 00:40:58.266880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.266908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.267033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.267062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.267191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.267222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.267388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.267416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.267525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.267556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 
00:35:34.816 [2024-11-18 00:40:58.267684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.267713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.267831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.267861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.816 [2024-11-18 00:40:58.267981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.816 [2024-11-18 00:40:58.268010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.816 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.268135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.268165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.268254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.268284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 
00:35:34.817 [2024-11-18 00:40:58.268425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.268451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.268567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.268594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.268722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.268749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.268884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.268915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.269046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.269078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 
00:35:34.817 [2024-11-18 00:40:58.269221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.269251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.269392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.269421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.269515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.269541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.269670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.269699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.269859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.269904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 
00:35:34.817 [2024-11-18 00:40:58.269982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.270008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.270119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.270145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.270256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.270283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.270409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.270435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.270520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.270548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 
00:35:34.817 [2024-11-18 00:40:58.270644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.270672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.270812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.270839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.270977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.271003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.271093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.271119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.271207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.271234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 
00:35:34.817 [2024-11-18 00:40:58.271349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.271377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.271464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.271491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.271635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.271661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.271771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.271798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.271928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.271959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 
00:35:34.817 [2024-11-18 00:40:58.272121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.272153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.272306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.272364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.272470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.272517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.272678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.272723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.272862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.272908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 
00:35:34.817 [2024-11-18 00:40:58.273077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.273123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.273247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.273289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.273415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.273441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.273575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.273622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.817 [2024-11-18 00:40:58.273755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.273799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 
00:35:34.817 [2024-11-18 00:40:58.273928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.817 [2024-11-18 00:40:58.273972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.817 qpair failed and we were unable to recover it. 00:35:34.818 [2024-11-18 00:40:58.274084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.818 [2024-11-18 00:40:58.274109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.818 qpair failed and we were unable to recover it. 00:35:34.818 [2024-11-18 00:40:58.274215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.818 [2024-11-18 00:40:58.274242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.818 qpair failed and we were unable to recover it. 00:35:34.818 [2024-11-18 00:40:58.274324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.818 [2024-11-18 00:40:58.274351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.818 qpair failed and we were unable to recover it. 00:35:34.818 [2024-11-18 00:40:58.274479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.818 [2024-11-18 00:40:58.274507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.818 qpair failed and we were unable to recover it. 
00:35:34.818 [2024-11-18 00:40:58.274680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.818 [2024-11-18 00:40:58.274729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.818 qpair failed and we were unable to recover it. 00:35:34.818 [2024-11-18 00:40:58.274888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.818 [2024-11-18 00:40:58.274932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.818 qpair failed and we were unable to recover it. 00:35:34.818 [2024-11-18 00:40:58.275034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.818 [2024-11-18 00:40:58.275070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.818 qpair failed and we were unable to recover it. 00:35:34.818 [2024-11-18 00:40:58.275187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.818 [2024-11-18 00:40:58.275214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.818 qpair failed and we were unable to recover it. 00:35:34.818 [2024-11-18 00:40:58.275323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.818 [2024-11-18 00:40:58.275362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.818 qpair failed and we were unable to recover it. 
00:35:34.818 [2024-11-18 00:40:58.275498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.818 [2024-11-18 00:40:58.275529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.818 qpair failed and we were unable to recover it.
00:35:34.818 [2024-11-18 00:40:58.276403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.818 [2024-11-18 00:40:58.276443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.818 qpair failed and we were unable to recover it.
00:35:34.818 [2024-11-18 00:40:58.277096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.818 [2024-11-18 00:40:58.277130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.818 qpair failed and we were unable to recover it.
[... the same connect() errno = 111 / sock connection error / "qpair failed and we were unable to recover it" triplet repeats ~112 more times between 00:40:58.277 and 00:40:58.294, cycling over tqpair=0x7eff48000b90, 0x18bcb40, and 0x7eff50000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:35:34.821 [2024-11-18 00:40:58.294099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.294125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.294231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.294257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.294341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.294367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.294512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.294541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.294639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.294666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 
00:35:34.821 [2024-11-18 00:40:58.294773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.294803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.294943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.294970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.295129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.295169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.295288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.295323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.295472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.295516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 
00:35:34.821 [2024-11-18 00:40:58.295669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.295714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.295825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.295870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.295988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.296032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.296147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.296175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.296299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.296330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 
00:35:34.821 [2024-11-18 00:40:58.296463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.296508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.296648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.296698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.296816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.296842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.296933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.296960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.297072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.297097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 
00:35:34.821 [2024-11-18 00:40:58.297210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.297236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.297380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.297425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.297505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.297531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.821 [2024-11-18 00:40:58.297642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.821 [2024-11-18 00:40:58.297668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.821 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.297824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.297850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 
00:35:34.822 [2024-11-18 00:40:58.297972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.297997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.298091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.298130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.298227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.298255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.298396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.298427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.298539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.298585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 
00:35:34.822 [2024-11-18 00:40:58.298759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.298805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.298941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.298971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.299172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.299202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.299333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.299377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.299488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.299517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 
00:35:34.822 [2024-11-18 00:40:58.299655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.299687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.299875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.299921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.300004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.300030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.300123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.300149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.300272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.300299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 
00:35:34.822 [2024-11-18 00:40:58.300406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.300434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.300585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.300627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.300749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.300775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.300954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.301002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.301117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.301151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 
00:35:34.822 [2024-11-18 00:40:58.301325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.301374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.301525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.301554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.301697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.301727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.301856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.301887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.302021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.302048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 
00:35:34.822 [2024-11-18 00:40:58.302160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.302186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.302277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.302304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.302431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.302473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.302617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.302661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.302815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.302863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 
00:35:34.822 [2024-11-18 00:40:58.302956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.302987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.303123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.303175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.303278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.303307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.303427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.303455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.303596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.303642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 
00:35:34.822 [2024-11-18 00:40:58.303730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.303756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.822 qpair failed and we were unable to recover it. 00:35:34.822 [2024-11-18 00:40:58.303867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.822 [2024-11-18 00:40:58.303892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.304007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.304034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.304179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.304206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.304337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.304363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 
00:35:34.823 [2024-11-18 00:40:58.304502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.304528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.304621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.304648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.304757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.304783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.304870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.304899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.305031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.305059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 
00:35:34.823 [2024-11-18 00:40:58.305192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.305219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.305331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.305358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.305474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.305520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.305643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.305672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.305827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.305873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 
00:35:34.823 [2024-11-18 00:40:58.305993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.306019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.306111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.306139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.306232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.306258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.306422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.306452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.306548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.306578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 
00:35:34.823 [2024-11-18 00:40:58.306754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.306787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.306916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.306948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.307037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.307068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.307175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.307224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.307350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.307377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 
00:35:34.823 [2024-11-18 00:40:58.307474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.307503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.307607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.307633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.307777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.307807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.307913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.307958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.308084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.308116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 
00:35:34.823 [2024-11-18 00:40:58.308248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.308293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.308428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.308455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.308542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.308569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.308689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.308715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.308850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.308895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 
00:35:34.823 [2024-11-18 00:40:58.309023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.309052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.309160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.309187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.309283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.309316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.309461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.823 [2024-11-18 00:40:58.309506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.823 qpair failed and we were unable to recover it. 00:35:34.823 [2024-11-18 00:40:58.309619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.309646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 
00:35:34.824 [2024-11-18 00:40:58.309764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.309790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.309869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.309895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.309978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.310004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.310142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.310168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.310282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.310308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 
00:35:34.824 [2024-11-18 00:40:58.310412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.310438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.310560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.310587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.310668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.310695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.310805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.310831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.310976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.311001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 
00:35:34.824 [2024-11-18 00:40:58.311109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.311147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.311278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.311306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.311445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.311475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.311634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.311663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.311753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.311783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 
00:35:34.824 [2024-11-18 00:40:58.311873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.311903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.312032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.312058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.312155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.312182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.312296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.312329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.312463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.312492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 
00:35:34.824 [2024-11-18 00:40:58.312619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.312648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.312756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.312783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.312917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.312946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.313085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.313116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.313195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.313223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 
00:35:34.824 [2024-11-18 00:40:58.313334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.313362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.313497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.313541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.313670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.313700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.313998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.314025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.314148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.314174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 
00:35:34.824 [2024-11-18 00:40:58.314283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.314309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.314428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.314472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.314566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.314593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.314730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.314775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.314888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.314915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 
00:35:34.824 [2024-11-18 00:40:58.315058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.315085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.315213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.824 [2024-11-18 00:40:58.315252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.824 qpair failed and we were unable to recover it. 00:35:34.824 [2024-11-18 00:40:58.315404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.315433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.315545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.315572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.315691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.315718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 
00:35:34.825 [2024-11-18 00:40:58.315826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.315853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.316009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.316036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.316145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.316170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.316317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.316344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.316428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.316455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 
00:35:34.825 [2024-11-18 00:40:58.316563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.316592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.316726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.316770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.316896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.316926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.317017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.317059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.317207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.317233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 
00:35:34.825 [2024-11-18 00:40:58.317375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.317408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.317544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.317574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.317705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.317747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.317877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.317907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.318055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.318084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 
00:35:34.825 [2024-11-18 00:40:58.318175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.318205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.318299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.318361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.318501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.318527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.318688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.318717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.318873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.318928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 
00:35:34.825 [2024-11-18 00:40:58.319071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.319115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.319250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.319294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.319427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.319456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.319589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.319619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.319781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.319811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 
00:35:34.825 [2024-11-18 00:40:58.319907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.319937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.320042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.320072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.320194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.320225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.320399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.320427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 00:35:34.825 [2024-11-18 00:40:58.320513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.825 [2024-11-18 00:40:58.320558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.825 qpair failed and we were unable to recover it. 
00:35:34.825 [2024-11-18 00:40:58.320711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.826 [2024-11-18 00:40:58.320740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.826 qpair failed and we were unable to recover it. 00:35:34.826 [2024-11-18 00:40:58.320931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.826 [2024-11-18 00:40:58.320998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.826 qpair failed and we were unable to recover it. 00:35:34.826 [2024-11-18 00:40:58.321109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.826 [2024-11-18 00:40:58.321138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.826 qpair failed and we were unable to recover it. 00:35:34.826 [2024-11-18 00:40:58.321259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.826 [2024-11-18 00:40:58.321288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.826 qpair failed and we were unable to recover it. 00:35:34.826 [2024-11-18 00:40:58.321431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.826 [2024-11-18 00:40:58.321458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.826 qpair failed and we were unable to recover it. 
00:35:34.826 [2024-11-18 00:40:58.321600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.826 [2024-11-18 00:40:58.321626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.826 qpair failed and we were unable to recover it. 00:35:34.826 [2024-11-18 00:40:58.321764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.826 [2024-11-18 00:40:58.321790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.826 qpair failed and we were unable to recover it. 00:35:34.826 [2024-11-18 00:40:58.321909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.826 [2024-11-18 00:40:58.321953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.826 qpair failed and we were unable to recover it. 00:35:34.826 [2024-11-18 00:40:58.322165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.826 [2024-11-18 00:40:58.322208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.826 qpair failed and we were unable to recover it. 00:35:34.826 [2024-11-18 00:40:58.322345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.826 [2024-11-18 00:40:58.322374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.826 qpair failed and we were unable to recover it. 
00:35:34.826 [2024-11-18 00:40:58.322508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.322536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.322699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.322745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.322965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.323021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.323149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.323176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.323289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.323324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.323449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.323494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.323606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.323632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.323726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.323753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.323855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.323894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.324020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.324049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.324173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.324199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.324350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.324377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.324463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.324509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.324592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.324621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.324712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.324741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.324872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.324900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.325023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.325051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.325183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.325209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.325332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.325358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.325465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.325490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.325600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.325628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.325726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.325755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.325869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.325898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.326074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.326120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.326240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.326267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.326373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.326403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.326577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.326622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.326768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.826 [2024-11-18 00:40:58.326812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.826 qpair failed and we were unable to recover it.
00:35:34.826 [2024-11-18 00:40:58.326973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.327002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.327158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.327183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.327275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.327301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.327449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.327492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.327619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.327648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.327828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.327871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.328010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.328036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.328146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.328176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.328295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.328334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.328446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.328494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.328679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.328706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.328823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.328850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.328962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.328988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.329133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.329159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.329253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.329287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.329449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.329488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.329605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.329651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.329787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.329831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.329939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.329968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.330099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.330126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.330257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.330283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.330432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.330460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.330599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.330625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.330728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.330754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.330867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.330893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.331031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.331058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.331138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.331164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.331270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.331298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.331393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.331420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.331588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.331632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.331773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.331817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.331896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.331923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.332042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.332069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.332160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.332188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.332268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.332294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.332432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.332462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.332588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.332623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.827 [2024-11-18 00:40:58.332774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.827 [2024-11-18 00:40:58.332803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.827 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.332902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.332931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.333040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.333067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.333217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.333256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.333390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.333419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.333511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.333540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.333704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.333734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.333850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.333880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.333979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.334020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.334201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.334269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.334435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.334462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.334553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.334581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.334686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.334716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.334882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.334927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.335008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.335036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.335176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.335202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.335379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.335409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.335525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.335556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.335683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.335712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.335847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.335876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.335993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.336022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.336144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.336175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.336323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.336350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.336509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.336538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.336662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.336692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.336815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.336845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.336997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.337042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.337184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.337210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.337317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.337344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.337486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.337512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.337626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.337652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.337735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.337761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.337896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.337925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.338053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.338079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.338192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.338219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.338332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.338359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.338496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.338522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.828 qpair failed and we were unable to recover it.
00:35:34.828 [2024-11-18 00:40:58.338670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.828 [2024-11-18 00:40:58.338696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.338832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.338864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.338956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.338999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.339155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.339182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.339263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.339290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.339451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.339494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.339612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.339644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.339768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.339798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.339908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.339937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.340038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.340068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.340179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.340207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.340325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.340370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.340496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.340525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.340621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.340649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.340736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.340764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.340869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.340898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.341022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.341052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.341186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.341212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.341302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.341336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.341464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.341492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.341606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.341634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.341780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.341808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.341930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.341959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.342108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.342136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.342233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.342262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.342436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.342463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.342561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.342589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.342721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.342749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.342841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.342869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.343019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.343070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.343155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.343183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.343291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.343324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.343459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.343487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.343637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.343680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.343817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.343845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.343976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.344014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.344138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.344172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.344281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.344324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.829 qpair failed and we were unable to recover it.
00:35:34.829 [2024-11-18 00:40:58.344481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.829 [2024-11-18 00:40:58.344511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.344740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.344809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.344956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.344984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.345100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.345127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.345253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.345280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.345445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.345472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.345566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.345593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.345699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.345726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.345848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.345876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.345959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.345986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.346108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.346136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.346281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.346307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.346406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.346432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.346508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.346534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.346648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.346675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.346792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.346819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.346955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.346982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.347072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.347098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.347214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.347253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.347405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.347434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.347556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.347584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.347727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.347754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.347866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.347894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.348014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.348048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.348186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.348212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.348337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.348365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.348490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.348517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.348603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.348631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.348751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.348777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.348924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.348951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.349070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.349096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.349236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.349266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.349374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.349401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.349530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.830 [2024-11-18 00:40:58.349559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.830 qpair failed and we were unable to recover it.
00:35:34.830 [2024-11-18 00:40:58.349655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.831 [2024-11-18 00:40:58.349681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.831 qpair failed and we were unable to recover it.
00:35:34.831 [2024-11-18 00:40:58.349821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.831 [2024-11-18 00:40:58.349850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.831 qpair failed and we were unable to recover it.
00:35:34.831 [2024-11-18 00:40:58.349986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.831 [2024-11-18 00:40:58.350012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.831 qpair failed and we were unable to recover it.
00:35:34.831 [2024-11-18 00:40:58.350137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.831 [2024-11-18 00:40:58.350175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.831 qpair failed and we were unable to recover it.
00:35:34.831 [2024-11-18 00:40:58.350271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.831 [2024-11-18 00:40:58.350299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.831 qpair failed and we were unable to recover it.
00:35:34.831 [2024-11-18 00:40:58.350398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.831 [2024-11-18 00:40:58.350427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.831 qpair failed and we were unable to recover it.
00:35:34.831 [2024-11-18 00:40:58.350525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.831 [2024-11-18 00:40:58.350551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.831 qpair failed and we were unable to recover it.
00:35:34.831 [2024-11-18 00:40:58.350673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.831 [2024-11-18 00:40:58.350699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.831 qpair failed and we were unable to recover it.
00:35:34.831 [2024-11-18 00:40:58.350782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.831 [2024-11-18 00:40:58.350819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.831 qpair failed and we were unable to recover it.
00:35:34.831 [2024-11-18 00:40:58.350917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.831 [2024-11-18 00:40:58.350945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.831 qpair failed and we were unable to recover it.
00:35:34.831 [2024-11-18 00:40:58.351041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.831 [2024-11-18 00:40:58.351067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.831 qpair failed and we were unable to recover it.
00:35:34.831 [2024-11-18 00:40:58.351221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.831 [2024-11-18 00:40:58.351247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.831 qpair failed and we were unable to recover it.
00:35:34.831 [2024-11-18 00:40:58.351331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.831 [2024-11-18 00:40:58.351361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.831 qpair failed and we were unable to recover it.
00:35:34.831 [2024-11-18 00:40:58.351452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.831 [2024-11-18 00:40:58.351479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.831 qpair failed and we were unable to recover it.
00:35:34.831 [2024-11-18 00:40:58.351617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.831 [2024-11-18 00:40:58.351642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.831 qpair failed and we were unable to recover it.
00:35:34.831 [2024-11-18 00:40:58.351763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.831 [2024-11-18 00:40:58.351790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.831 qpair failed and we were unable to recover it.
00:35:34.831 [2024-11-18 00:40:58.351919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.351946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 00:35:34.831 [2024-11-18 00:40:58.352067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.352094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 00:35:34.831 [2024-11-18 00:40:58.352182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.352210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 00:35:34.831 [2024-11-18 00:40:58.352328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.352355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 00:35:34.831 [2024-11-18 00:40:58.352470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.352496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 
00:35:34.831 [2024-11-18 00:40:58.352614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.352641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 00:35:34.831 [2024-11-18 00:40:58.352782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.352808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 00:35:34.831 [2024-11-18 00:40:58.352899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.352925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 00:35:34.831 [2024-11-18 00:40:58.353041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.353069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 00:35:34.831 [2024-11-18 00:40:58.353212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.353239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 
00:35:34.831 [2024-11-18 00:40:58.353373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.353412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 00:35:34.831 [2024-11-18 00:40:58.353527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.353555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 00:35:34.831 [2024-11-18 00:40:58.353714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.353765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 00:35:34.831 [2024-11-18 00:40:58.353900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.353926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 00:35:34.831 [2024-11-18 00:40:58.354043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.354071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 
00:35:34.831 [2024-11-18 00:40:58.354164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.354190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 00:35:34.831 [2024-11-18 00:40:58.354270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.354297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 00:35:34.831 [2024-11-18 00:40:58.354388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.354414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 00:35:34.831 [2024-11-18 00:40:58.354527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.354554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 00:35:34.831 [2024-11-18 00:40:58.354704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.354730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 
00:35:34.831 [2024-11-18 00:40:58.354848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.354875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 00:35:34.831 [2024-11-18 00:40:58.354964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.831 [2024-11-18 00:40:58.354991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.831 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.355118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.355156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.355302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.355337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.355457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.355484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 
00:35:34.832 [2024-11-18 00:40:58.355594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.355620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.355700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.355726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.355819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.355846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.355959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.355986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.356072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.356100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 
00:35:34.832 [2024-11-18 00:40:58.356192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.356219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.356366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.356393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.356509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.356535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.356628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.356655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.356772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.356799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 
00:35:34.832 [2024-11-18 00:40:58.356924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.356953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.357043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.357071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.357212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.357238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.357324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.357352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.357440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.357466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 
00:35:34.832 [2024-11-18 00:40:58.357582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.357608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.357685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.357710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.357820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.357846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.357923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.357951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.358067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.358093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 
00:35:34.832 [2024-11-18 00:40:58.358201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.358228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.358322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.358350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.358442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.358469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.358610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.358641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.358788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.358814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 
00:35:34.832 [2024-11-18 00:40:58.358889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.358915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.359002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.359029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.359167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.359194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.359276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.359302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.359445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.359474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 
00:35:34.832 [2024-11-18 00:40:58.359620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.359673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.359800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.359826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.359941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.359967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.360110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.360137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 00:35:34.832 [2024-11-18 00:40:58.360282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.832 [2024-11-18 00:40:58.360315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.832 qpair failed and we were unable to recover it. 
00:35:34.833 [2024-11-18 00:40:58.360397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.360424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.360510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.360555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.360719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.360749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.360908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.360935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.361049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.361076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 
00:35:34.833 [2024-11-18 00:40:58.361155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.361182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.361331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.361358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.361471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.361496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.361608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.361634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.361754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.361780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 
00:35:34.833 [2024-11-18 00:40:58.361891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.361917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.362007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.362034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.362148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.362174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.362283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.362323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.362411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.362438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 
00:35:34.833 [2024-11-18 00:40:58.362530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.362562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.362677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.362704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.362807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.362834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.362933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.362961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.363088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.363128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 
00:35:34.833 [2024-11-18 00:40:58.363218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.363246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.363365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.363392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.363472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.363498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.363615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.363641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.363719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.363745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 
00:35:34.833 [2024-11-18 00:40:58.363857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.363882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.363991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.364021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.364120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.364148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.364266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.364294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.364447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.364474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 
00:35:34.833 [2024-11-18 00:40:58.364567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.364594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.364741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.364766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.364883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.364909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.365029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.365056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.365175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.365203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 
00:35:34.833 [2024-11-18 00:40:58.365341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.365381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.365486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.833 [2024-11-18 00:40:58.365516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.833 qpair failed and we were unable to recover it. 00:35:34.833 [2024-11-18 00:40:58.365632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.365659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.365798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.365824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.365911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.365938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 
00:35:34.834 [2024-11-18 00:40:58.366032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.366059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.366172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.366198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.366284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.366317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.366399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.366425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.366512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.366538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 
00:35:34.834 [2024-11-18 00:40:58.366665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.366692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.366801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.366829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.366947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.366973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.367061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.367087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.367176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.367202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 
00:35:34.834 [2024-11-18 00:40:58.367351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.367377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.367471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.367496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.367600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.367625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.367735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.367761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.367875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.367901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 
00:35:34.834 [2024-11-18 00:40:58.368014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.368047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.368181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.368220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.368340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.368369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.368475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.368501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.368639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.368665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 
00:35:34.834 [2024-11-18 00:40:58.368781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.368808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.368958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.368985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.369129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.369157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.369285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.369317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.369444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.369473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 
00:35:34.834 [2024-11-18 00:40:58.369581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.369626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.369786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.369816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.370033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.370098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.370239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.370266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.834 [2024-11-18 00:40:58.370372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.370399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 
00:35:34.834 [2024-11-18 00:40:58.370508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.834 [2024-11-18 00:40:58.370534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.834 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.370626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.370652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.370768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.370794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.370938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.370966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.371085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.371113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 
00:35:34.835 [2024-11-18 00:40:58.371238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.371266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.371360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.371387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.371514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.371542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.371698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.371749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.371854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.371880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 
00:35:34.835 [2024-11-18 00:40:58.371999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.372024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.372140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.372167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.372282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.372320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.372464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.372490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.372581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.372607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 
00:35:34.835 [2024-11-18 00:40:58.372693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.372720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.372838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.372865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.372973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.372998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.373108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.373134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.373245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.373271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 
00:35:34.835 [2024-11-18 00:40:58.373416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.373442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.373555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.373581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.373696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.373722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.373833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.373858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.373974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.374000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 
00:35:34.835 [2024-11-18 00:40:58.374088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.374115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.374211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.374237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.374387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.374415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.374537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.374564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.374702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.374728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 
00:35:34.835 [2024-11-18 00:40:58.374814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.374840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.374959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.374986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.375072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.375097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.375212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.375239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.375361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.375388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 
00:35:34.835 [2024-11-18 00:40:58.375475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.375501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.375604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.375630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.375748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.375774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.835 [2024-11-18 00:40:58.375884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.835 [2024-11-18 00:40:58.375911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.835 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.376031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.376058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 
00:35:34.836 [2024-11-18 00:40:58.376146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.376173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.376265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.376290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.376411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.376439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.376530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.376557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.376646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.376673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 
00:35:34.836 [2024-11-18 00:40:58.376784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.376811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.376958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.376985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.377133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.377159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.377270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.377296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.377420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.377446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 
00:35:34.836 [2024-11-18 00:40:58.377538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.377564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.377703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.377729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.377870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.377901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.378018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.378044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.378181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.378207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 
00:35:34.836 [2024-11-18 00:40:58.378331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.378359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.378491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.378537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.378671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.378700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.378831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.378857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.379000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.379026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 
00:35:34.836 [2024-11-18 00:40:58.379145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.379170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.379249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.379276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.379443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.379487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.379601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.379627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.379763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.379808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 
00:35:34.836 [2024-11-18 00:40:58.379920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.379946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.380067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.380095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.380182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.380208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.380340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.380367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.380456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.380482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 
00:35:34.836 [2024-11-18 00:40:58.380601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.380627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.380736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.380762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.380871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.380897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.381014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.381040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.381154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.381181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 
00:35:34.836 [2024-11-18 00:40:58.381260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.381286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.836 qpair failed and we were unable to recover it. 00:35:34.836 [2024-11-18 00:40:58.381404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.836 [2024-11-18 00:40:58.381431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.381545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.381571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.381681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.381707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.381813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.381845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 
00:35:34.837 [2024-11-18 00:40:58.381959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.381985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.382061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.382087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.382172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.382198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.382308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.382346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.382487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.382513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 
00:35:34.837 [2024-11-18 00:40:58.382594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.382620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.382725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.382751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.382860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.382886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.382976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.383002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.383116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.383141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 
00:35:34.837 [2024-11-18 00:40:58.383248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.383274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.383368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.383395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.383504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.383530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.383645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.383671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.383790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.383816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 
00:35:34.837 [2024-11-18 00:40:58.383931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.383957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.384084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.384123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.384217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.384246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.384383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.384409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.384581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.384607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 
00:35:34.837 [2024-11-18 00:40:58.384776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.384834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.384951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.384977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.385127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.385154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.385247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.385274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.385370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.385415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 
00:35:34.837 [2024-11-18 00:40:58.385516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.385559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.385795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.385854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.385995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.386021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.386106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.386132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.386218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.386245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 
00:35:34.837 [2024-11-18 00:40:58.386355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.386381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.386515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.386544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.386716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.386745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.386873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.386898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 00:35:34.837 [2024-11-18 00:40:58.387007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.837 [2024-11-18 00:40:58.387033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.837 qpair failed and we were unable to recover it. 
00:35:34.838 [2024-11-18 00:40:58.387116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.387143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.387256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.387285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.387380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.387406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.387547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.387574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.387658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.387684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 
00:35:34.838 [2024-11-18 00:40:58.387792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.387818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.387934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.387961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.388050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.388076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.388185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.388211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.388332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.388359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 
00:35:34.838 [2024-11-18 00:40:58.388464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.388489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.388642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.388681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.388832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.388860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.389000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.389026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.389140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.389167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 
00:35:34.838 [2024-11-18 00:40:58.389283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.389315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.389458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.389484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.389572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.389598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.389716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.389749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.389831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.389858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 
00:35:34.838 [2024-11-18 00:40:58.389973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.389999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.390146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.390172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.390295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.390328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.390440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.390466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.390578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.390604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 
00:35:34.838 [2024-11-18 00:40:58.390743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.390769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.390842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.390868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.390949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.390975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.391091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.391117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.391199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.391227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 
00:35:34.838 [2024-11-18 00:40:58.391342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.391369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.391454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.391479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.391621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.391647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.391791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.391817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.391929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.391955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 
00:35:34.838 [2024-11-18 00:40:58.392071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.392099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.392184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.838 [2024-11-18 00:40:58.392210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.838 qpair failed and we were unable to recover it. 00:35:34.838 [2024-11-18 00:40:58.392296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.839 [2024-11-18 00:40:58.392330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.839 qpair failed and we were unable to recover it. 00:35:34.839 [2024-11-18 00:40:58.392415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.839 [2024-11-18 00:40:58.392440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.839 qpair failed and we were unable to recover it. 00:35:34.839 [2024-11-18 00:40:58.392569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.839 [2024-11-18 00:40:58.392598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.839 qpair failed and we were unable to recover it. 
00:35:34.839 [2024-11-18 00:40:58.392740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.392766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.392901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.392926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.393041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.393067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.393178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.393204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.393320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.393362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.393497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.393526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.393616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.393644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.393791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.393818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.393939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.393967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.394079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.394107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.394237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.394264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.394438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.394482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.394614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.394642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.394816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.394858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.394972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.394998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.395112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.395138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.395255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.395281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.395425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.395454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.395624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.395667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.395809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.395851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.395995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.396021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.396163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.396189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.396277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.396303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.396419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.396447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.396550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.396576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.396730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.396773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.396859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.396885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.397021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.397047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.397155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.397180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.397287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.397320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.397434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.839 [2024-11-18 00:40:58.397461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.839 qpair failed and we were unable to recover it.
00:35:34.839 [2024-11-18 00:40:58.397572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.397598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.397741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.397768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.397858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.397884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.398002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.398031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.398124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.398150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.398292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.398324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.398465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.398493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.398634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.398660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.398776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.398802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.398935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.398983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.399073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.399100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.399214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.399240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.399356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.399382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.399501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.399527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.399643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.399674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.399789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.399816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.399939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.399965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.400081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.400106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.400219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.400245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.400387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.400414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.400502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.400527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.400658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.400686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.400801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.400829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.400970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.400997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.401105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.401132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.401252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.401278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.401417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.401460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.401571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.401613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.401787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.401816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.401953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.401996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.402076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.402103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.402243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.402269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.402376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.402405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.402516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.402544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.402665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.402693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.402840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.402868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.403049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.403093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.403209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.840 [2024-11-18 00:40:58.403236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.840 qpair failed and we were unable to recover it.
00:35:34.840 [2024-11-18 00:40:58.403321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.403347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.403468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.403511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.403615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.403642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.403776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.403806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.403887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.403914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.404032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.404059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.404199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.404226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.404363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.404390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.404479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.404505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.404608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.404633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.404723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.404750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.404856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.404883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.404992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.405018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.405090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.405115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.405227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.405254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.405369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.405395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.405540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.405565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.405650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.405676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.405764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.405791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.405931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.405957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.406035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.406062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.406201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.406227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.406347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.406376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.406469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.406495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.406581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.406624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.406700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.841 [2024-11-18 00:40:58.406742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.841 qpair failed and we were unable to recover it.
00:35:34.841 [2024-11-18 00:40:58.406851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.841 [2024-11-18 00:40:58.406877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.841 qpair failed and we were unable to recover it. 00:35:34.841 [2024-11-18 00:40:58.406991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.841 [2024-11-18 00:40:58.407016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.841 qpair failed and we were unable to recover it. 00:35:34.841 [2024-11-18 00:40:58.407127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.841 [2024-11-18 00:40:58.407153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.841 qpair failed and we were unable to recover it. 00:35:34.841 [2024-11-18 00:40:58.407265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.841 [2024-11-18 00:40:58.407292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.841 qpair failed and we were unable to recover it. 00:35:34.841 [2024-11-18 00:40:58.407396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.841 [2024-11-18 00:40:58.407423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.841 qpair failed and we were unable to recover it. 
00:35:34.841 [2024-11-18 00:40:58.407525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.841 [2024-11-18 00:40:58.407553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.841 qpair failed and we were unable to recover it. 00:35:34.841 [2024-11-18 00:40:58.407665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.841 [2024-11-18 00:40:58.407693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.841 qpair failed and we were unable to recover it. 00:35:34.841 [2024-11-18 00:40:58.407804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.841 [2024-11-18 00:40:58.407831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.841 qpair failed and we were unable to recover it. 00:35:34.841 [2024-11-18 00:40:58.407974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.841 [2024-11-18 00:40:58.408003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.841 qpair failed and we were unable to recover it. 00:35:34.841 [2024-11-18 00:40:58.408141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.841 [2024-11-18 00:40:58.408167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.841 qpair failed and we were unable to recover it. 
00:35:34.841 [2024-11-18 00:40:58.408256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.841 [2024-11-18 00:40:58.408283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.841 qpair failed and we were unable to recover it. 00:35:34.841 [2024-11-18 00:40:58.408384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.841 [2024-11-18 00:40:58.408412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.841 qpair failed and we were unable to recover it. 00:35:34.841 [2024-11-18 00:40:58.408564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.408607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.408762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.408804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.408961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.409003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 
00:35:34.842 [2024-11-18 00:40:58.409146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.409172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.409282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.409308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.409451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.409480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.409576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.409604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.409745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.409811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 
00:35:34.842 [2024-11-18 00:40:58.409916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.409960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.410126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.410154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.410278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.410306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.410507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.410560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.410643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.410670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 
00:35:34.842 [2024-11-18 00:40:58.410838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.410866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.410966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.410992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.411141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.411167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.411246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.411271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.411389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.411416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 
00:35:34.842 [2024-11-18 00:40:58.411511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.411537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.411667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.411706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.411854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.411881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.411998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.412025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.412137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.412164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 
00:35:34.842 [2024-11-18 00:40:58.412248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.412274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.412413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.412443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.412567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.412594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.412696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.412740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.412904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.412933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 
00:35:34.842 [2024-11-18 00:40:58.413062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.413088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.413228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.413253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.413389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.413433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.413519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.413546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.413676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.413724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 
00:35:34.842 [2024-11-18 00:40:58.413854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.413883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.414037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.414063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.414174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.414201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.414323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.414350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 00:35:34.842 [2024-11-18 00:40:58.414459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.842 [2024-11-18 00:40:58.414487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.842 qpair failed and we were unable to recover it. 
00:35:34.842 [2024-11-18 00:40:58.414637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.414665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.414757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.414785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.414907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.414933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.415054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.415080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.415230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.415257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 
00:35:34.843 [2024-11-18 00:40:58.415346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.415372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.415486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.415513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.415605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.415631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.415776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.415821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.415897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.415923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 
00:35:34.843 [2024-11-18 00:40:58.416015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.416042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.416124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.416151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.416268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.416296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.416410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.416440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.416557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.416586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 
00:35:34.843 [2024-11-18 00:40:58.416676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.416707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.416859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.416889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.417009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.417039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.417176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.417202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.417344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.417370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 
00:35:34.843 [2024-11-18 00:40:58.417455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.417482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.417621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.417667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.417832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.417879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.418010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.418054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.418164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.418190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 
00:35:34.843 [2024-11-18 00:40:58.418273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.418299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.418429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.418458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.418598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.418626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.418759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.418785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.418924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.418963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 
00:35:34.843 [2024-11-18 00:40:58.419087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.419115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.419230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.419257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.419376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.843 [2024-11-18 00:40:58.419403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.843 qpair failed and we were unable to recover it. 00:35:34.843 [2024-11-18 00:40:58.419543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.844 [2024-11-18 00:40:58.419569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.844 qpair failed and we were unable to recover it. 00:35:34.844 [2024-11-18 00:40:58.419682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.844 [2024-11-18 00:40:58.419714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.844 qpair failed and we were unable to recover it. 
00:35:34.844 [2024-11-18 00:40:58.419839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.419868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.419985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.420014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.420169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.420235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.420393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.420421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.420532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.420558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.420667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.420693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.420830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.420860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.421008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.421037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.421172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.421202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.421358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.421384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.421465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.421491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.421627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.421656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.421787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.421816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.421953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.421983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.422140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.422169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.422278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.422306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.422444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.422470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.422600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.422630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.422784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.422810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.422961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.423000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.423093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.423121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.423266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.423294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.423439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.423465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.423613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.423640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.423749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.423775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.423876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.423904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.424032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.424067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.424205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.424249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.424372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.424399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.424565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.424608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.424766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.424810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.424935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.424979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.425063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.425090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.425201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.425228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.425305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.425339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.844 [2024-11-18 00:40:58.425462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.844 [2024-11-18 00:40:58.425489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.844 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.425607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.425636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.425734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.425762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.425862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.425891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.426065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.426110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.426210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.426250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.426350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.426379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.426520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.426547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.426652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.426695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.426860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.426890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.427009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.427038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.427176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.427204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.427327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.427363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.427448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.427475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.427597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.427626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.427774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.427818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.427977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.428024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.428135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.428161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.428283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.428316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.428404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.428430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.428528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.428557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.428708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.428735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.428871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.428896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.429002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.429028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.429110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.429137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.429248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.429274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.429422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.429466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.429630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.429673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.429783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.429810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.429948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.429974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.430060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.430085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.430165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.430196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.430334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.430380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.430512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.430542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.430665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.430694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.430830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.430856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.430951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.430978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.431091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.845 [2024-11-18 00:40:58.431117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.845 qpair failed and we were unable to recover it.
00:35:34.845 [2024-11-18 00:40:58.431207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.431233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.431344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.431388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.431479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.431507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.431655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.431686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.431833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.431876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.432015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.432057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.432204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.432231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.432372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.432417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.432540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.432583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.432695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.432723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.432844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.432870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.432983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.433010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.433096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.433122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.433218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.433244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.433388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.433414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.433518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.433547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.433645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.433674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.433796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.433824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.433950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.433978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.434102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.434145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.434227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.434258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.434372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.434398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.434517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.434543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.434675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.434703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.434835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.434880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.435054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.435083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.435222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.435248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.435344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.435370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.435492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.435519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.435611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.435640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.435760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.435786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.435919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.435948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.436063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.436089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.436215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.436241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.436339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.436365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.436478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.436503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.436654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.436683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.436802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.846 [2024-11-18 00:40:58.436831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.846 qpair failed and we were unable to recover it.
00:35:34.846 [2024-11-18 00:40:58.436963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.847 [2024-11-18 00:40:58.437007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.847 qpair failed and we were unable to recover it.
00:35:34.847 [2024-11-18 00:40:58.437164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.847 [2024-11-18 00:40:58.437195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.847 qpair failed and we were unable to recover it.
00:35:34.847 [2024-11-18 00:40:58.437296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.847 [2024-11-18 00:40:58.437333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.847 qpair failed and we were unable to recover it.
00:35:34.847 [2024-11-18 00:40:58.437465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.847 [2024-11-18 00:40:58.437492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.847 qpair failed and we were unable to recover it.
00:35:34.847 [2024-11-18 00:40:58.437601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.437630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.437787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.437816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.437915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.437941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.438072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.438102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.438232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.438262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 
00:35:34.847 [2024-11-18 00:40:58.438479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.438512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.438730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.438759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.438915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.438945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.439055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.439101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.439251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.439290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 
00:35:34.847 [2024-11-18 00:40:58.439409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.439438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.439519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.439546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.439684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.439726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.439861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.439904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.440038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.440082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 
00:35:34.847 [2024-11-18 00:40:58.440225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.440253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.440348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.440376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.440521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.440547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.440681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.440727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.440963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.441030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 
00:35:34.847 [2024-11-18 00:40:58.441159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.441203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.441348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.441376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.441517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.441562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.441728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.441771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.441894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.441938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 
00:35:34.847 [2024-11-18 00:40:58.442080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.442106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.442226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.442266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.442360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.442387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.442506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.442533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.442663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.442694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 
00:35:34.847 [2024-11-18 00:40:58.442800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.442830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.442957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.442987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.443124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.443150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.443233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.847 [2024-11-18 00:40:58.443259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.847 qpair failed and we were unable to recover it. 00:35:34.847 [2024-11-18 00:40:58.443371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.443398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 
00:35:34.848 [2024-11-18 00:40:58.443556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.443601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.443728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.443756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.443916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.443960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.444140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.444205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.444367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.444410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 
00:35:34.848 [2024-11-18 00:40:58.444532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.444558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.444659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.444689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.444804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.444836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.444962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.444992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.445149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.445179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 
00:35:34.848 [2024-11-18 00:40:58.445337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.445368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.445463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.445489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.445600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.445627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.445759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.445788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.445887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.445918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 
00:35:34.848 [2024-11-18 00:40:58.446058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.446089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.446274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.446304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.446444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.446470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.446623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.446652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.446769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.446797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 
00:35:34.848 [2024-11-18 00:40:58.446928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.446958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.447103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.447133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.447249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.447292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.447448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.447475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.447566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.447592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 
00:35:34.848 [2024-11-18 00:40:58.447673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.447699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.447804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.447834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.447925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.447969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.448098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.448143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.448231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.448275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 
00:35:34.848 [2024-11-18 00:40:58.448423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.448449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.448537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.848 [2024-11-18 00:40:58.448564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.848 qpair failed and we were unable to recover it. 00:35:34.848 [2024-11-18 00:40:58.448684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.849 [2024-11-18 00:40:58.448711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.849 qpair failed and we were unable to recover it. 00:35:34.849 [2024-11-18 00:40:58.448881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.849 [2024-11-18 00:40:58.448910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.849 qpair failed and we were unable to recover it. 00:35:34.849 [2024-11-18 00:40:58.449031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.849 [2024-11-18 00:40:58.449077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.849 qpair failed and we were unable to recover it. 
00:35:34.849 [2024-11-18 00:40:58.449242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.849 [2024-11-18 00:40:58.449273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.849 qpair failed and we were unable to recover it. 00:35:34.849 [2024-11-18 00:40:58.449402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.849 [2024-11-18 00:40:58.449429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.849 qpair failed and we were unable to recover it. 00:35:34.849 [2024-11-18 00:40:58.449568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.849 [2024-11-18 00:40:58.449606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.849 qpair failed and we were unable to recover it. 00:35:34.849 [2024-11-18 00:40:58.449855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.849 [2024-11-18 00:40:58.449909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.849 qpair failed and we were unable to recover it. 00:35:34.849 [2024-11-18 00:40:58.450020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.849 [2024-11-18 00:40:58.450047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.849 qpair failed and we were unable to recover it. 
00:35:34.849 [2024-11-18 00:40:58.450157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.450185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.450284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.450323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.450424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.450451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.450561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.450587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.450695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.450725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.450846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.450876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.451030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.451061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.451174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.451201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.451318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.451345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.451459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.451485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.451617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.451646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.451745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.451775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.451917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.451963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.452136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.452182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.452286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.452327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.452468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.452493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.452626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.452656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.452785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.452826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.452981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.453012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.453166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.453196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.453355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.453382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.453495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.453521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.453632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.453659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.453811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.453840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.453935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.453978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.454149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.454178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.454289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.454336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.454436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.454464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.454612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.454639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.454771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.849 [2024-11-18 00:40:58.454814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.849 qpair failed and we were unable to recover it.
00:35:34.849 [2024-11-18 00:40:58.454912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.454944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.455085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.455130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.455239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.455270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.455414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.455441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.455537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.455564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.455711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.455757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.455866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.455892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.456040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.456094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.456215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.456244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.456364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.456392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.456505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.456532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.456649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.456675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.456771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.456801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.456892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.456922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.457085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.457115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.457270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.457296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.457425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.457451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.457567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.457593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.457680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.457706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.457820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.457846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.457922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.457947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.458099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.458153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.458248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.458276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.458409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.458449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.458600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.458628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.458791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.458838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.458981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.459026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.459153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.459196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.459287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.459327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.459422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.459449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.459533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.459560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.459650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.459693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.459806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.459835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.459920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.459948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.460086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.460122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.460295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.460342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.460461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.460489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.460588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.850 [2024-11-18 00:40:58.460617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.850 qpair failed and we were unable to recover it.
00:35:34.850 [2024-11-18 00:40:58.460694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.460720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.460817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.460849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.460967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.460996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.461155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.461185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.461323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.461350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.461472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.461498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.461663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.461707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.461851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.461896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.462040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.462087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.462205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.462248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.462380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.462408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.462524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.462551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.462646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.462690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.462836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.462865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.462960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.462990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.463093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.463118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.463264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.463289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.463410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.463437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.463544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.463569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.463654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.463680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.463759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.463803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.463927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.463956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.464081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.464110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.464231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.464279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.464401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.464445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.464568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.464597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.464725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.464754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.464875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.464904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.465044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.465103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.465229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.465257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.465402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.465448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.465572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.465608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.465718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.465745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.465859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.465886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.466029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.466056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.466198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.466225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.466329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.466356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.466451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.851 [2024-11-18 00:40:58.466478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.851 qpair failed and we were unable to recover it.
00:35:34.851 [2024-11-18 00:40:58.466601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.852 [2024-11-18 00:40:58.466627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.852 qpair failed and we were unable to recover it.
00:35:34.852 [2024-11-18 00:40:58.466747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.852 [2024-11-18 00:40:58.466773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.852 qpair failed and we were unable to recover it.
00:35:34.852 [2024-11-18 00:40:58.466854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.852 [2024-11-18 00:40:58.466880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.852 qpair failed and we were unable to recover it.
00:35:34.852 [2024-11-18 00:40:58.466965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.852 [2024-11-18 00:40:58.466991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.852 qpair failed and we were unable to recover it.
00:35:34.852 [2024-11-18 00:40:58.467070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.852 [2024-11-18 00:40:58.467096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.852 qpair failed and we were unable to recover it.
00:35:34.852 [2024-11-18 00:40:58.467201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.852 [2024-11-18 00:40:58.467227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.852 qpair failed and we were unable to recover it.
00:35:34.852 [2024-11-18 00:40:58.467351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.852 [2024-11-18 00:40:58.467378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.852 qpair failed and we were unable to recover it.
00:35:34.852 [2024-11-18 00:40:58.467462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.852 [2024-11-18 00:40:58.467489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.852 qpair failed and we were unable to recover it.
00:35:34.852 [2024-11-18 00:40:58.467577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.852 [2024-11-18 00:40:58.467604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.852 qpair failed and we were unable to recover it.
00:35:34.852 [2024-11-18 00:40:58.467687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.467715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.467857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.467884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.468052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.468079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.468207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.468234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.468380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.468406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 
00:35:34.852 [2024-11-18 00:40:58.468493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.468520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.468606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.468634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.468716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.468743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.468857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.468883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.468968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.468994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 
00:35:34.852 [2024-11-18 00:40:58.469106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.469133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.469223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.469248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.469392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.469419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.469543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.469569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.469688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.469713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 
00:35:34.852 [2024-11-18 00:40:58.469837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.469863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.469959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.469985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.470127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.470154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.470270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.470298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.470388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.470414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 
00:35:34.852 [2024-11-18 00:40:58.470525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.470551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.470659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.470688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.470826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.470876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.471002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.471035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.471129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.471155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 
00:35:34.852 [2024-11-18 00:40:58.471238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.471264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.471358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.471384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.471524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.471553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.471683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.471712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.471802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.471831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 
00:35:34.852 [2024-11-18 00:40:58.471978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.472009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.852 qpair failed and we were unable to recover it. 00:35:34.852 [2024-11-18 00:40:58.472133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.852 [2024-11-18 00:40:58.472159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.472269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.472295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.472445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.472472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.472792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.472822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 
00:35:34.853 [2024-11-18 00:40:58.472945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.472971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.473068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.473093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.473174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.473201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.473298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.473348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.473473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.473502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 
00:35:34.853 [2024-11-18 00:40:58.473629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.473657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.473799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.473826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.473968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.473995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.474135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.474167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.474257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.474285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 
00:35:34.853 [2024-11-18 00:40:58.474393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.474422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.474571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.474616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.474760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.474803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.474932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.474984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.475099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.475125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 
00:35:34.853 [2024-11-18 00:40:58.475216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.475256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.475401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.475440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.475542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.475581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.475785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.475838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.475956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.476007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 
00:35:34.853 [2024-11-18 00:40:58.476182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.476220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.476364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.476403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.476539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.476585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.476727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.476771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.476931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.476978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 
00:35:34.853 [2024-11-18 00:40:58.477092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.477118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.477198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.477224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.477322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.477349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.477457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.477490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.477590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.477620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 
00:35:34.853 [2024-11-18 00:40:58.477710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.477740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.477873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.477900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.853 qpair failed and we were unable to recover it. 00:35:34.853 [2024-11-18 00:40:58.477990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.853 [2024-11-18 00:40:58.478035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.854 qpair failed and we were unable to recover it. 00:35:34.854 [2024-11-18 00:40:58.478192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.854 [2024-11-18 00:40:58.478220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.854 qpair failed and we were unable to recover it. 00:35:34.854 [2024-11-18 00:40:58.478326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.854 [2024-11-18 00:40:58.478353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.854 qpair failed and we were unable to recover it. 
00:35:34.854 [2024-11-18 00:40:58.478483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.854 [2024-11-18 00:40:58.478522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.854 qpair failed and we were unable to recover it. 00:35:34.854 [2024-11-18 00:40:58.478693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.854 [2024-11-18 00:40:58.478736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.854 qpair failed and we were unable to recover it. 00:35:34.854 [2024-11-18 00:40:58.478837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.854 [2024-11-18 00:40:58.478867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.854 qpair failed and we were unable to recover it. 00:35:34.854 [2024-11-18 00:40:58.478967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.854 [2024-11-18 00:40:58.478997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.854 qpair failed and we were unable to recover it. 00:35:34.854 [2024-11-18 00:40:58.479099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.854 [2024-11-18 00:40:58.479131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.854 qpair failed and we were unable to recover it. 
00:35:34.854 [2024-11-18 00:40:58.479295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.854 [2024-11-18 00:40:58.479330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.854 qpair failed and we were unable to recover it.
00:35:34.854 [identical connect()/qpair-failure pairs repeated through 00:40:58.497 for tqpairs 0x7eff44000b90, 0x7eff48000b90, 0x7eff50000b90, and 0x18bcb40, all against addr=10.0.0.2, port=4420; duplicates elided]
00:35:34.856 [2024-11-18 00:40:58.497231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.497262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.497415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.497442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.497526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.497552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.497710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.497748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.497960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.497998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 
00:35:34.857 [2024-11-18 00:40:58.498231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.498258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.498404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.498433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.498524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.498550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.498646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.498672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.498781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.498808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 
00:35:34.857 [2024-11-18 00:40:58.498916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.498945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.499040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.499070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.499172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.499230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.499368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.499407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.499552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.499579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 
00:35:34.857 [2024-11-18 00:40:58.499731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.499758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.499871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.499896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.500063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.500092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.500183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.500212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.500359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.500399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 
00:35:34.857 [2024-11-18 00:40:58.500545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.500572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.500669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.500714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.500838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.500867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.500966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.501008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.501133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.501162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 
00:35:34.857 [2024-11-18 00:40:58.501282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.501322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.501482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.501508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.501597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.501638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.501800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.501829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.501961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.502005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 
00:35:34.857 [2024-11-18 00:40:58.502126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.502154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.502293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.502347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.502453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.857 [2024-11-18 00:40:58.502482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.857 qpair failed and we were unable to recover it. 00:35:34.857 [2024-11-18 00:40:58.502600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.502627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.502765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.502791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 
00:35:34.858 [2024-11-18 00:40:58.502899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.502957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.503111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.503155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.503278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.503315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.503419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.503446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.503567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.503596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 
00:35:34.858 [2024-11-18 00:40:58.503709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.503756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.503902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.503953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.504115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.504145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.504295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.504339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.504500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.504526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 
00:35:34.858 [2024-11-18 00:40:58.504709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.504758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.504892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.504944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.505091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.505120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.505235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.505264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.505392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.505419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 
00:35:34.858 [2024-11-18 00:40:58.505557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.505586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.505734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.505764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.505913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.505958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.506069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.506096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.506208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.506241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 
00:35:34.858 [2024-11-18 00:40:58.506340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.506381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.506529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.506557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.506657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.506686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.506809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.506838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 00:35:34.858 [2024-11-18 00:40:58.506947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.858 [2024-11-18 00:40:58.506974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.858 qpair failed and we were unable to recover it. 
00:35:34.858 [2024-11-18 00:40:58.507086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.507114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 00:35:34.859 [2024-11-18 00:40:58.507205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.507233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 00:35:34.859 [2024-11-18 00:40:58.507355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.507386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 00:35:34.859 [2024-11-18 00:40:58.507497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.507526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 00:35:34.859 [2024-11-18 00:40:58.507639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.507666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 
00:35:34.859 [2024-11-18 00:40:58.507845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.507897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 00:35:34.859 [2024-11-18 00:40:58.507990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.508020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 00:35:34.859 [2024-11-18 00:40:58.508123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.508154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 00:35:34.859 [2024-11-18 00:40:58.508292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.508324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 00:35:34.859 [2024-11-18 00:40:58.508438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.508465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 
00:35:34.859 [2024-11-18 00:40:58.508605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.508631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 00:35:34.859 [2024-11-18 00:40:58.508728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.508757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 00:35:34.859 [2024-11-18 00:40:58.508869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.508923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 00:35:34.859 [2024-11-18 00:40:58.509052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.509082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 00:35:34.859 [2024-11-18 00:40:58.509219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.509248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 
00:35:34.859 [2024-11-18 00:40:58.509400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.509427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 00:35:34.859 [2024-11-18 00:40:58.509556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.509586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 00:35:34.859 [2024-11-18 00:40:58.509744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.509772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 00:35:34.859 [2024-11-18 00:40:58.509886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.509912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 00:35:34.859 [2024-11-18 00:40:58.509997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.510023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.859 qpair failed and we were unable to recover it. 
00:35:34.859 [2024-11-18 00:40:58.510103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.859 [2024-11-18 00:40:58.510130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.860 qpair failed and we were unable to recover it. 00:35:34.860 [2024-11-18 00:40:58.510269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.860 [2024-11-18 00:40:58.510306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.860 qpair failed and we were unable to recover it. 00:35:34.860 [2024-11-18 00:40:58.510399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.860 [2024-11-18 00:40:58.510426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.860 qpair failed and we were unable to recover it. 00:35:34.860 [2024-11-18 00:40:58.510519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.860 [2024-11-18 00:40:58.510545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.860 qpair failed and we were unable to recover it. 00:35:34.860 [2024-11-18 00:40:58.510618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.860 [2024-11-18 00:40:58.510644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.860 qpair failed and we were unable to recover it. 
00:35:34.860 [2024-11-18 00:40:58.510725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.510754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.510869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.510896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.511020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.511061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.511182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.511212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.511363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.511396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.511586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.511641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.511869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.511919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.512006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.512035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.512201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.512227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.512375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.512403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.512610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.512672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.512837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.512890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.513123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.513174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.513248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.513275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.513377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.513406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.513553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.513597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.513767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.513820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.514010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.514037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.514196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.514225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.514322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.514349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.514462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.514489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.514653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.514717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.514922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.514978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.515193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.515259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.515407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.515438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.515580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.515607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.515718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.515745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.515893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.515958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.516050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.516081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.516193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.516238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.516398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.516427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 404189 Killed "${NVMF_APP[@]}" "$@"
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.516525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.516565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 [2024-11-18 00:40:58.516686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 [2024-11-18 00:40:58.516715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.860 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:35:34.860 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:35:34.860 [2024-11-18 00:40:58.517032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.860 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:34.860 [2024-11-18 00:40:58.517110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.860 qpair failed and we were unable to recover it.
00:35:34.861 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:34.861 [2024-11-18 00:40:58.517332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.517360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:34.861 [2024-11-18 00:40:58.517504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.517532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.517676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.517705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.517888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.517918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.518077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.518108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.518201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.518231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.518364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.518393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.518508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.518536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.518652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.518681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.518808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.518856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.519105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.519172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.519354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.519399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.519529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.519559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.519660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.519696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.519835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.519893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.520037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.520085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.520252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.520287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.520422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.520450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.520607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.520646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.520809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.520869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.520979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.521012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.521160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.521187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.521332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.521360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=404743
00:35:34.861 [2024-11-18 00:40:58.521455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.521483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 404743
00:35:34.861 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:35:34.861 [2024-11-18 00:40:58.521603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.521631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 404743 ']'
00:35:34.861 [2024-11-18 00:40:58.521783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.521813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:34.861 [2024-11-18 00:40:58.521949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:34.861 [2024-11-18 00:40:58.521980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:34.861 [2024-11-18 00:40:58.522078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... [2024-11-18 00:40:58.522124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:34.861 [2024-11-18 00:40:58.522260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.522291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.522421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.522448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.522543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.522571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.522690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.522717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.522847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.522876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.861 qpair failed and we were unable to recover it.
00:35:34.861 [2024-11-18 00:40:58.523009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.861 [2024-11-18 00:40:58.523039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.523162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.523192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.523336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.523395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.523492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.523531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.523672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.523719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.523809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.523837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.524008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.524057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.524169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.524195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.524322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.524348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.524481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.524525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.524698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.524739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.524902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.524963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.525166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.525213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.525373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.525405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.525503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.525534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.525669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.525717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.525912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.525946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.526077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.526129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.526244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.526272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.526374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.526402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.526485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.526513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.526631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.526659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.526772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.862 [2024-11-18 00:40:58.526803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.862 qpair failed and we were unable to recover it.
00:35:34.862 [2024-11-18 00:40:58.526912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.862 [2024-11-18 00:40:58.526945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.862 qpair failed and we were unable to recover it. 00:35:34.862 [2024-11-18 00:40:58.527061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.862 [2024-11-18 00:40:58.527091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.862 qpair failed and we were unable to recover it. 00:35:34.862 [2024-11-18 00:40:58.527184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.862 [2024-11-18 00:40:58.527211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.862 qpair failed and we were unable to recover it. 00:35:34.862 [2024-11-18 00:40:58.527324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.862 [2024-11-18 00:40:58.527352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.862 qpair failed and we were unable to recover it. 00:35:34.862 [2024-11-18 00:40:58.527444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.862 [2024-11-18 00:40:58.527471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.862 qpair failed and we were unable to recover it. 
00:35:34.862 [2024-11-18 00:40:58.527570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.862 [2024-11-18 00:40:58.527601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.862 qpair failed and we were unable to recover it. 00:35:34.862 [2024-11-18 00:40:58.527784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.862 [2024-11-18 00:40:58.527817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.862 qpair failed and we were unable to recover it. 00:35:34.862 [2024-11-18 00:40:58.527952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.862 [2024-11-18 00:40:58.527984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.862 qpair failed and we were unable to recover it. 00:35:34.862 [2024-11-18 00:40:58.528143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.862 [2024-11-18 00:40:58.528170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.862 qpair failed and we were unable to recover it. 00:35:34.862 [2024-11-18 00:40:58.528278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.862 [2024-11-18 00:40:58.528305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.862 qpair failed and we were unable to recover it. 
00:35:34.862 [2024-11-18 00:40:58.528429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.862 [2024-11-18 00:40:58.528457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.862 qpair failed and we were unable to recover it. 00:35:34.862 [2024-11-18 00:40:58.528567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.528598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.528753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.528807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.528921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.528948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.529035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.529062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 
00:35:34.863 [2024-11-18 00:40:58.529158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.529185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.529287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.529351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.529453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.529482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.529567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.529595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.529708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.529736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 
00:35:34.863 [2024-11-18 00:40:58.529827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.529854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.529967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.529993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.530098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.530125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.530267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.530293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.530403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.530429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 
00:35:34.863 [2024-11-18 00:40:58.530509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.530536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.530616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.530643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.530757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.530784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.530891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.530919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.531019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.531059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 
00:35:34.863 [2024-11-18 00:40:58.531193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.531234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.531387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.531416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.531499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.531532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.531661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.531691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.531811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.531859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 
00:35:34.863 [2024-11-18 00:40:58.532019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.532049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.532173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.532204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.532317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.532363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.532443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.532488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.532624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.532654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 
00:35:34.863 [2024-11-18 00:40:58.532792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.532823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.532976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.533009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.533103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.533131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.533269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.533296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.533418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.533448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 
00:35:34.863 [2024-11-18 00:40:58.533600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.533630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.533789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.533833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.533972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.534001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.534127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.863 [2024-11-18 00:40:58.534155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.863 qpair failed and we were unable to recover it. 00:35:34.863 [2024-11-18 00:40:58.534296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.534342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 
00:35:34.864 [2024-11-18 00:40:58.534478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.534522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.534674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.534700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.534808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.534835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.534961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.534989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.535101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.535137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 
00:35:34.864 [2024-11-18 00:40:58.535215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.535243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.535351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.535379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.535473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.535500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.535586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.535619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.535742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.535770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 
00:35:34.864 [2024-11-18 00:40:58.535915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.535942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.536042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.536085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.536241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.536275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.536394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.536435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.536566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.536620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 
00:35:34.864 [2024-11-18 00:40:58.536711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.536740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.536820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.536847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.536944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.536974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.537099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.537129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.537276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.537337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 
00:35:34.864 [2024-11-18 00:40:58.537498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.537529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.537656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.537686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.537788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.537828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.538006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.538037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.538168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.538195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 
00:35:34.864 [2024-11-18 00:40:58.538279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.538308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.538404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.538432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.538537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.538565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.538715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.538745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.538870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.538901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 
00:35:34.864 [2024-11-18 00:40:58.539047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.539094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.539229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.539261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.539358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.539386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.539502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.539529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.539664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.539693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 
00:35:34.864 [2024-11-18 00:40:58.539821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.539850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.539979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.540008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.540156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.540192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.540299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.864 [2024-11-18 00:40:58.540334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.864 qpair failed and we were unable to recover it. 00:35:34.864 [2024-11-18 00:40:58.540474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.540500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 
00:35:34.865 [2024-11-18 00:40:58.540584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.540611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.540731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.540760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.540892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.540919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.541045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.541073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.541159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.541186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 
00:35:34.865 [2024-11-18 00:40:58.541296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.541329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.541408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.541435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.541525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.541552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.541672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.541699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.541809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.541860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 
00:35:34.865 [2024-11-18 00:40:58.542020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.542064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.542203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.542236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.542377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.542406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.542549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.542593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.542685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.542712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 
00:35:34.865 [2024-11-18 00:40:58.542808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.542838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.543010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.543054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.543157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.543188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.543327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.543373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.543467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.543497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 
00:35:34.865 [2024-11-18 00:40:58.543588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.543627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.543749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.543779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.543929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.543959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.544081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.544111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.544248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.544276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 
00:35:34.865 [2024-11-18 00:40:58.544413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.544458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.544591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.544635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.544773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.544817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.544949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.544993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.545136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.545162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 
00:35:34.865 [2024-11-18 00:40:58.545279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.545308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.545428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.545471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.545591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.545619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.545721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.545750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.545874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.545903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 
00:35:34.865 [2024-11-18 00:40:58.545996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.546023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.546154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.546182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.546316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.546344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.865 [2024-11-18 00:40:58.546498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.865 [2024-11-18 00:40:58.546541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.865 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.546651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.546679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 
00:35:34.866 [2024-11-18 00:40:58.546851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.546894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.546996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.547026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.547135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.547163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.547291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.547340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.547480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.547510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 
00:35:34.866 [2024-11-18 00:40:58.547702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.547730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.547822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.547849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.547948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.547976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.548115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.548144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.548271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.548300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 
00:35:34.866 [2024-11-18 00:40:58.548444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.548473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.548601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.548629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.548716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.548743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.548863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.548891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.548975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.549002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 
00:35:34.866 [2024-11-18 00:40:58.549112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.549146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.549261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.549289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.549419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.549448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.549538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.549566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.549679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.549707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 
00:35:34.866 [2024-11-18 00:40:58.549793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.549819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.549909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.549948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.550065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.550092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.550177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.550204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.550296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.550335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 
00:35:34.866 [2024-11-18 00:40:58.550427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.550455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.550574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.550601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.550745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.550772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.550917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.550944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.551031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.551057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 
00:35:34.866 [2024-11-18 00:40:58.551174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.551200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.551292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.551325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.551406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.551433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.866 [2024-11-18 00:40:58.551544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.866 [2024-11-18 00:40:58.551571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.866 qpair failed and we were unable to recover it. 00:35:34.867 [2024-11-18 00:40:58.551658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.551686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 
00:35:34.867 [2024-11-18 00:40:58.551781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.551808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 00:35:34.867 [2024-11-18 00:40:58.551888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.551915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 00:35:34.867 [2024-11-18 00:40:58.552002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.552029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 00:35:34.867 [2024-11-18 00:40:58.552160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.552189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 00:35:34.867 [2024-11-18 00:40:58.552301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.552337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 
00:35:34.867 [2024-11-18 00:40:58.552457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.552485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 00:35:34.867 [2024-11-18 00:40:58.552578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.552616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 00:35:34.867 [2024-11-18 00:40:58.552737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.552765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 00:35:34.867 [2024-11-18 00:40:58.552894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.552922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 00:35:34.867 [2024-11-18 00:40:58.553034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.553062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 
00:35:34.867 [2024-11-18 00:40:58.553192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.553232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 00:35:34.867 [2024-11-18 00:40:58.553369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.553400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 00:35:34.867 [2024-11-18 00:40:58.553542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.553569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 00:35:34.867 [2024-11-18 00:40:58.553659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.553687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 00:35:34.867 [2024-11-18 00:40:58.553774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.553801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 
00:35:34.867 [2024-11-18 00:40:58.553895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.553924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 00:35:34.867 [2024-11-18 00:40:58.554049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.554077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 00:35:34.867 [2024-11-18 00:40:58.554189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.554216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 00:35:34.867 [2024-11-18 00:40:58.554291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.554335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 00:35:34.867 [2024-11-18 00:40:58.554434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.867 [2024-11-18 00:40:58.554462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.867 qpair failed and we were unable to recover it. 
00:35:34.867 [... identical connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." records repeat from 00:40:58.554570 through 00:40:58.570377 for tqpair values 0x18bcb40, 0x7eff44000b90, 0x7eff48000b90, and 0x7eff50000b90, all with addr=10.0.0.2, port=4420 ...]
00:35:34.870 [2024-11-18 00:40:58.570502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.870 [2024-11-18 00:40:58.570547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.870 qpair failed and we were unable to recover it. 00:35:34.870 [2024-11-18 00:40:58.570644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.870 [2024-11-18 00:40:58.570673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.870 qpair failed and we were unable to recover it. 00:35:34.870 [2024-11-18 00:40:58.570782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.870 [2024-11-18 00:40:58.570809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.870 qpair failed and we were unable to recover it. 00:35:34.870 [2024-11-18 00:40:58.570922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.870 [2024-11-18 00:40:58.570950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.870 qpair failed and we were unable to recover it. 00:35:34.870 [2024-11-18 00:40:58.571068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.870 [2024-11-18 00:40:58.571105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.870 qpair failed and we were unable to recover it. 
00:35:34.870 [2024-11-18 00:40:58.571233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.870 [2024-11-18 00:40:58.571266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.870 qpair failed and we were unable to recover it. 00:35:34.870 [2024-11-18 00:40:58.571389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.870 [2024-11-18 00:40:58.571423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.870 qpair failed and we were unable to recover it. 00:35:34.870 [2024-11-18 00:40:58.571515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.870 [2024-11-18 00:40:58.571542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.870 qpair failed and we were unable to recover it. 00:35:34.870 [2024-11-18 00:40:58.571530] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:34.870 [2024-11-18 00:40:58.571621] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:34.870 [2024-11-18 00:40:58.571635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.870 [2024-11-18 00:40:58.571662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.870 qpair failed and we were unable to recover it. 
00:35:34.870 [2024-11-18 00:40:58.571744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.870 [2024-11-18 00:40:58.571770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.870 qpair failed and we were unable to recover it. 00:35:34.870 [2024-11-18 00:40:58.571877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.870 [2024-11-18 00:40:58.571903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.870 qpair failed and we were unable to recover it. 00:35:34.870 [2024-11-18 00:40:58.572023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.870 [2024-11-18 00:40:58.572048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.870 qpair failed and we were unable to recover it. 00:35:34.870 [2024-11-18 00:40:58.572194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.870 [2024-11-18 00:40:58.572222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.870 qpair failed and we were unable to recover it. 00:35:34.870 [2024-11-18 00:40:58.572338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.870 [2024-11-18 00:40:58.572378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.870 qpair failed and we were unable to recover it. 
00:35:34.870 [2024-11-18 00:40:58.572500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.572529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.572630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.572658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.572770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.572797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.572906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.572938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.573018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.573045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 
00:35:34.871 [2024-11-18 00:40:58.573170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.573196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.573317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.573348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.573462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.573490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.573600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.573627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.573743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.573769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 
00:35:34.871 [2024-11-18 00:40:58.573889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.573918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.574047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.574075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.574189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.574217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.574340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.574369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.574462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.574489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 
00:35:34.871 [2024-11-18 00:40:58.574609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.574637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.574731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.574758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.574882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.574912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.575024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.575051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.575138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.575168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 
00:35:34.871 [2024-11-18 00:40:58.575253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.575281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.575403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.575432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.575585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.575612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.575710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.575738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.575849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.575876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 
00:35:34.871 [2024-11-18 00:40:58.575989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.576022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.576112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.576139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.576267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.576307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.576440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.576469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.576560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.576587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 
00:35:34.871 [2024-11-18 00:40:58.576707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.871 [2024-11-18 00:40:58.576739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.871 qpair failed and we were unable to recover it. 00:35:34.871 [2024-11-18 00:40:58.576894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.576922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.577005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.577032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.577112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.577151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.577289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.577340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 
00:35:34.872 [2024-11-18 00:40:58.577430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.577458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.577571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.577599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.577682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.577710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.577838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.577866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.577996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.578024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 
00:35:34.872 [2024-11-18 00:40:58.578133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.578160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.578280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.578316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.578407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.578436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.578534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.578562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.578683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.578710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 
00:35:34.872 [2024-11-18 00:40:58.578812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.578852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.578940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.578968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.579079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.579108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.579186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.579213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.579300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.579336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 
00:35:34.872 [2024-11-18 00:40:58.579477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.579504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.579595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.579622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.579725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.579767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.579858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.579887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.579966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.579995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 
00:35:34.872 [2024-11-18 00:40:58.580110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.580138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.580223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.580250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.580352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.580381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.580494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.580522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.580632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.580659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 
00:35:34.872 [2024-11-18 00:40:58.580775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.580814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.580907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.580934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.581019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.581046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.581141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.581170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.581261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.581289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 
00:35:34.872 [2024-11-18 00:40:58.581422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.581453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.581571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.581598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.581692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.581725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.872 [2024-11-18 00:40:58.581840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.872 [2024-11-18 00:40:58.581867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.872 qpair failed and we were unable to recover it. 00:35:34.873 [2024-11-18 00:40:58.581959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.873 [2024-11-18 00:40:58.581985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.873 qpair failed and we were unable to recover it. 
00:35:34.873 [2024-11-18 00:40:58.582094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.582126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.582265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.582294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.582395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.582423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.582519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.582547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.582661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.582688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.582804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.582832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.582949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.582976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.583057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.583085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.583173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.583200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.583308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.583344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.583434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.583462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.583546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.583573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.583695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.583722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.583829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.583856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.583976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.584006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.584107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.584147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.584270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.584300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.584407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.584438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.584581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.584619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.584731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.584757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.584843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.584870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.584958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.584986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.585095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.585122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.585248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.585277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.585405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.585433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.585554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.585586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.585698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.585737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.585874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.585908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.585996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.586023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.586108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.586134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.586255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.586293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.586434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.586462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.586567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.586593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.586682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.586710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.586793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.586820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.586905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.586934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.587023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.873 [2024-11-18 00:40:58.587050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.873 qpair failed and we were unable to recover it.
00:35:34.873 [2024-11-18 00:40:58.587143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.587170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.587318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.587347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.587432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.587459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.587571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.587605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.587701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.587729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.587872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.587899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.588037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.588064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.588178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.588206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.588356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.588387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.588485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.588513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.588595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.588622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.588710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.588757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.588894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.588921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.589039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.589067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.589189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.589217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.589344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.589384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.589536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.589564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.589677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.589704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.589791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.589818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.589937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.589975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.590082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.590109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.590185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.590212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.590317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.590348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.590437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.590466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.590571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.590599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.590773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.590801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.590949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.590976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.591096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.591133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.591274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.591302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.591420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.591448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.591590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.591621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.591712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.591740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.591851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.591879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.591975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.592002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.592119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.592146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.592261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.592288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.592380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.592410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.874 [2024-11-18 00:40:58.592518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.874 [2024-11-18 00:40:58.592557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.874 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.592678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.592717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.592811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.592847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.592937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.592964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.593044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.593071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.593222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.593248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.593348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.593376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.593471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.593498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.593615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.593643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.593729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.593767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.593848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.593876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.593976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.594005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.594086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.594114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.594211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.594242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.594343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.594372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.594487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.594514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.594634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.594662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.594806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.594833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.594974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.595002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.595086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.595124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.595212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:34.875 [2024-11-18 00:40:58.595241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:34.875 qpair failed and we were unable to recover it.
00:35:34.875 [2024-11-18 00:40:58.595333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.875 [2024-11-18 00:40:58.595364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.875 qpair failed and we were unable to recover it. 00:35:34.875 [2024-11-18 00:40:58.595473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.875 [2024-11-18 00:40:58.595500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.875 qpair failed and we were unable to recover it. 00:35:34.875 [2024-11-18 00:40:58.595613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.875 [2024-11-18 00:40:58.595640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.875 qpair failed and we were unable to recover it. 00:35:34.875 [2024-11-18 00:40:58.595725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.875 [2024-11-18 00:40:58.595752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.875 qpair failed and we were unable to recover it. 00:35:34.875 [2024-11-18 00:40:58.595841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.875 [2024-11-18 00:40:58.595868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.875 qpair failed and we were unable to recover it. 
00:35:34.875 [2024-11-18 00:40:58.595984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.875 [2024-11-18 00:40:58.596021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.875 qpair failed and we were unable to recover it. 00:35:34.875 [2024-11-18 00:40:58.596113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.875 [2024-11-18 00:40:58.596140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.875 qpair failed and we were unable to recover it. 00:35:34.875 [2024-11-18 00:40:58.596263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.875 [2024-11-18 00:40:58.596290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.875 qpair failed and we were unable to recover it. 00:35:34.875 [2024-11-18 00:40:58.596390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.875 [2024-11-18 00:40:58.596417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.875 qpair failed and we were unable to recover it. 00:35:34.875 [2024-11-18 00:40:58.596524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.875 [2024-11-18 00:40:58.596551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.875 qpair failed and we were unable to recover it. 
00:35:34.875 [2024-11-18 00:40:58.596649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.875 [2024-11-18 00:40:58.596690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.875 qpair failed and we were unable to recover it. 00:35:34.875 [2024-11-18 00:40:58.596793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.875 [2024-11-18 00:40:58.596833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.875 qpair failed and we were unable to recover it. 00:35:34.875 [2024-11-18 00:40:58.596944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.875 [2024-11-18 00:40:58.596978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.875 qpair failed and we were unable to recover it. 00:35:34.875 [2024-11-18 00:40:58.597067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.875 [2024-11-18 00:40:58.597095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.875 qpair failed and we were unable to recover it. 00:35:34.875 [2024-11-18 00:40:58.597250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.875 [2024-11-18 00:40:58.597276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.875 qpair failed and we were unable to recover it. 
00:35:34.875 [2024-11-18 00:40:58.597411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.875 [2024-11-18 00:40:58.597438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.875 qpair failed and we were unable to recover it. 00:35:34.875 [2024-11-18 00:40:58.597525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.875 [2024-11-18 00:40:58.597552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.875 qpair failed and we were unable to recover it. 00:35:34.875 [2024-11-18 00:40:58.597640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.875 [2024-11-18 00:40:58.597667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.875 qpair failed and we were unable to recover it. 00:35:34.875 [2024-11-18 00:40:58.597786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.597812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.597932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.597962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 
00:35:34.876 [2024-11-18 00:40:58.598060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.598099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.598200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.598229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.598347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.598377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.598508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.598541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.598620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.598649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 
00:35:34.876 [2024-11-18 00:40:58.598769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.598797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.598920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.598949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.599035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.599061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.599155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.599184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.599267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.599295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 
00:35:34.876 [2024-11-18 00:40:58.599383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.599411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.599523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.599551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.599699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.599727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.599831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.599858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.599975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.600003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 
00:35:34.876 [2024-11-18 00:40:58.600119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.600148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.600232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.600261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.600355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.600383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.600498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.600526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.600666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.600706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 
00:35:34.876 [2024-11-18 00:40:58.600791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.600821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.600938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.600965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.601076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.601103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.601226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.601252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.601360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.601387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 
00:35:34.876 [2024-11-18 00:40:58.601506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.601533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.601675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.601702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.601796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.601822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.601940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.601978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.602119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.602146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 
00:35:34.876 [2024-11-18 00:40:58.602264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.602293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.876 [2024-11-18 00:40:58.602401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.876 [2024-11-18 00:40:58.602429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.876 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.602556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.602585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.602686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.602713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.602801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.602829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 
00:35:34.877 [2024-11-18 00:40:58.602947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.602976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.603097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.603126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.603268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.603296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.603394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.603421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.603504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.603531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 
00:35:34.877 [2024-11-18 00:40:58.603625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.603652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.603782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.603810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.603920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.603949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.604038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.604067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.604184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.604211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 
00:35:34.877 [2024-11-18 00:40:58.604342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.604370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.604493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.604520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.604607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.604634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.604766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.604793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.604912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.604939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 
00:35:34.877 [2024-11-18 00:40:58.605056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.605084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.605196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.605223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.605330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.605358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.605455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.605482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.605573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.605600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 
00:35:34.877 [2024-11-18 00:40:58.605681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.605707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.605806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.605832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.605944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.605973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.606058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.606086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.606171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.606204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 
00:35:34.877 [2024-11-18 00:40:58.606325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.606353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.606464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.606493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.606586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.606614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.606700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.606727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.606814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.606851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 
00:35:34.877 [2024-11-18 00:40:58.606939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.606969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.607086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.607116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.607235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.607263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.607369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.877 [2024-11-18 00:40:58.607396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:34.877 qpair failed and we were unable to recover it. 00:35:34.877 [2024-11-18 00:40:58.607490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.878 [2024-11-18 00:40:58.607517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.165 qpair failed and we were unable to recover it. 
00:35:35.165 [2024-11-18 00:40:58.607842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.165 [2024-11-18 00:40:58.607876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.165 qpair failed and we were unable to recover it. 00:35:35.165 [2024-11-18 00:40:58.608017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.165 [2024-11-18 00:40:58.608045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.165 qpair failed and we were unable to recover it. 00:35:35.165 [2024-11-18 00:40:58.608171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.165 [2024-11-18 00:40:58.608200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.165 qpair failed and we were unable to recover it. 00:35:35.165 [2024-11-18 00:40:58.608302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.165 [2024-11-18 00:40:58.608337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.165 qpair failed and we were unable to recover it. 00:35:35.165 [2024-11-18 00:40:58.608432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.165 [2024-11-18 00:40:58.608459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.165 qpair failed and we were unable to recover it. 
00:35:35.165 [2024-11-18 00:40:58.608538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.165 [2024-11-18 00:40:58.608565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.165 qpair failed and we were unable to recover it. 00:35:35.165 [2024-11-18 00:40:58.608647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.165 [2024-11-18 00:40:58.608674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.165 qpair failed and we were unable to recover it. 00:35:35.165 [2024-11-18 00:40:58.608772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.165 [2024-11-18 00:40:58.608799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.165 qpair failed and we were unable to recover it. 00:35:35.165 [2024-11-18 00:40:58.608923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.165 [2024-11-18 00:40:58.608949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.165 qpair failed and we were unable to recover it. 00:35:35.165 [2024-11-18 00:40:58.609028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.165 [2024-11-18 00:40:58.609055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.165 qpair failed and we were unable to recover it. 
00:35:35.165 [2024-11-18 00:40:58.609151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.165 [2024-11-18 00:40:58.609177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.165 qpair failed and we were unable to recover it. 00:35:35.165 [2024-11-18 00:40:58.609265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.165 [2024-11-18 00:40:58.609291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.165 qpair failed and we were unable to recover it. 00:35:35.165 [2024-11-18 00:40:58.609391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.165 [2024-11-18 00:40:58.609421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.165 qpair failed and we were unable to recover it. 00:35:35.165 [2024-11-18 00:40:58.609541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.165 [2024-11-18 00:40:58.609569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.165 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.609691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.609718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 
00:35:35.166 [2024-11-18 00:40:58.609804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.609831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.609948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.609984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.610071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.610100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.610177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.610211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.610299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.610339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 
00:35:35.166 [2024-11-18 00:40:58.610433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.610466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.610580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.610607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.610688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.610714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.610813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.610842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.610932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.610961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 
00:35:35.166 [2024-11-18 00:40:58.611043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.611073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.611156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.611183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.611272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.611299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.611454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.611482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.611600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.611627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 
00:35:35.166 [2024-11-18 00:40:58.611749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.611776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.611859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.611886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.612004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.612031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.612117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.612144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.612256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.612282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 
00:35:35.166 [2024-11-18 00:40:58.612386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.612415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.612529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.612556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.612650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.612676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.612794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.612821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 00:35:35.166 [2024-11-18 00:40:58.612915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.166 [2024-11-18 00:40:58.612944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.166 qpair failed and we were unable to recover it. 
00:35:35.167 [2024-11-18 00:40:58.613064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.613093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.613237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.613275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.613381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.613409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.613537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.613568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.613670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.613709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 
00:35:35.167 [2024-11-18 00:40:58.613823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.613850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.613943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.613972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.614113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.614141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.614263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.614291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.614436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.614465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 
00:35:35.167 [2024-11-18 00:40:58.614583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.614614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.614732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.614760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.614866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.614894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.615046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.615074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.615170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.615199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 
00:35:35.167 [2024-11-18 00:40:58.615323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.615351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.615442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.615475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.615562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.615600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.615705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.615732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.615847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.615875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 
00:35:35.167 [2024-11-18 00:40:58.616007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.616035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.616132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.616161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.616295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.616336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.616458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.616485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.167 [2024-11-18 00:40:58.616600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.616627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 
00:35:35.167 [2024-11-18 00:40:58.616717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.167 [2024-11-18 00:40:58.616745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.167 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.616841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.616868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.616988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.617017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.617112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.617140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.617285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.617324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 
00:35:35.168 [2024-11-18 00:40:58.617418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.617445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.617542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.617568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.617653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.617680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.617763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.617789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.617904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.617931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 
00:35:35.168 [2024-11-18 00:40:58.618024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.618064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.618220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.618249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.618380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.618408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.618495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.618522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.618610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.618640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 
00:35:35.168 [2024-11-18 00:40:58.618770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.618797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.618912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.618939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.619052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.619079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.619158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.619203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.619293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.619342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 
00:35:35.168 [2024-11-18 00:40:58.619435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.619462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.619542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.619573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.619671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.619699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.619802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.619829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.619912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.619939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 
00:35:35.168 [2024-11-18 00:40:58.620058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.620085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.168 [2024-11-18 00:40:58.620189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.168 [2024-11-18 00:40:58.620229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.168 qpair failed and we were unable to recover it. 00:35:35.169 [2024-11-18 00:40:58.620365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.169 [2024-11-18 00:40:58.620393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.169 qpair failed and we were unable to recover it. 00:35:35.169 [2024-11-18 00:40:58.620509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.169 [2024-11-18 00:40:58.620536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.169 qpair failed and we were unable to recover it. 00:35:35.169 [2024-11-18 00:40:58.620627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.169 [2024-11-18 00:40:58.620654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.169 qpair failed and we were unable to recover it. 
00:35:35.169 [2024-11-18 00:40:58.620733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.169 [2024-11-18 00:40:58.620760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.169 qpair failed and we were unable to recover it. 00:35:35.169 [2024-11-18 00:40:58.620880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.169 [2024-11-18 00:40:58.620919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.169 qpair failed and we were unable to recover it. 00:35:35.169 [2024-11-18 00:40:58.621044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.169 [2024-11-18 00:40:58.621071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.169 qpair failed and we were unable to recover it. 00:35:35.169 [2024-11-18 00:40:58.621197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.169 [2024-11-18 00:40:58.621226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.169 qpair failed and we were unable to recover it. 00:35:35.169 [2024-11-18 00:40:58.621356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.169 [2024-11-18 00:40:58.621384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.169 qpair failed and we were unable to recover it. 
00:35:35.169 [2024-11-18 00:40:58.621500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.169 [2024-11-18 00:40:58.621529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.169 qpair failed and we were unable to recover it.
00:35:35.169 [2024-11-18 00:40:58.621653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.169 [2024-11-18 00:40:58.621681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.169 qpair failed and we were unable to recover it.
00:35:35.169 [2024-11-18 00:40:58.621777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.169 [2024-11-18 00:40:58.621803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.169 qpair failed and we were unable to recover it.
00:35:35.169 [2024-11-18 00:40:58.621941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.169 [2024-11-18 00:40:58.621968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.169 qpair failed and we were unable to recover it.
00:35:35.169 [2024-11-18 00:40:58.622088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.169 [2024-11-18 00:40:58.622117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.169 qpair failed and we were unable to recover it.
00:35:35.169 [2024-11-18 00:40:58.622243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.169 [2024-11-18 00:40:58.622283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.169 qpair failed and we were unable to recover it.
00:35:35.169 [2024-11-18 00:40:58.622390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.169 [2024-11-18 00:40:58.622418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.169 qpair failed and we were unable to recover it.
00:35:35.169 [2024-11-18 00:40:58.622508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.169 [2024-11-18 00:40:58.622536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.169 qpair failed and we were unable to recover it.
00:35:35.169 [2024-11-18 00:40:58.622634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.169 [2024-11-18 00:40:58.622661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.169 qpair failed and we were unable to recover it.
00:35:35.169 [2024-11-18 00:40:58.622776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.169 [2024-11-18 00:40:58.622805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.169 qpair failed and we were unable to recover it.
00:35:35.169 [2024-11-18 00:40:58.622922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.169 [2024-11-18 00:40:58.622949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.169 qpair failed and we were unable to recover it.
00:35:35.169 [2024-11-18 00:40:58.623064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.169 [2024-11-18 00:40:58.623091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.169 qpair failed and we were unable to recover it.
00:35:35.169 [2024-11-18 00:40:58.623186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.169 [2024-11-18 00:40:58.623216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.169 qpair failed and we were unable to recover it.
00:35:35.169 [2024-11-18 00:40:58.623348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.169 [2024-11-18 00:40:58.623378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.169 qpair failed and we were unable to recover it.
00:35:35.169 [2024-11-18 00:40:58.623498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.169 [2024-11-18 00:40:58.623525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.169 qpair failed and we were unable to recover it.
00:35:35.169 [2024-11-18 00:40:58.623617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.169 [2024-11-18 00:40:58.623645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.169 qpair failed and we were unable to recover it.
00:35:35.169 [2024-11-18 00:40:58.623743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.169 [2024-11-18 00:40:58.623770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.169 qpair failed and we were unable to recover it.
00:35:35.169 [2024-11-18 00:40:58.623889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.169 [2024-11-18 00:40:58.623918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.169 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.624037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.624064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.624170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.624218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.624324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.624354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.624495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.624523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.624617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.624644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.624727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.624758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.624908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.624935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.625047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.625076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.625170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.625200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.625342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.625391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.625490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.625519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.625646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.625675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.625761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.625788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.625914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.625942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.626076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.626104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.626209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.626250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.626377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.626405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.626524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.626552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.626684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.626711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.626840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.626867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.626960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.626988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.627103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.627131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.627270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.627328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.627452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.627480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.627566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.627606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.170 [2024-11-18 00:40:58.627690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.170 [2024-11-18 00:40:58.627717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.170 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.627843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.627871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.627952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.627979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.628068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.628097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.628214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.628242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.628375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.628403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.628493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.628520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.628646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.628686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.628778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.628805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.628920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.628947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.629025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.629052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.629161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.629188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.629282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.629321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.629434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.629461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.629546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.629574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.629682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.629709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.629809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.629837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.629965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.630004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.630096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.630124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.630218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.630247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.630380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.630408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.630503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.630531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.630641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.630679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.630793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.630820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.630945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.630972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.631066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.171 [2024-11-18 00:40:58.631093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.171 qpair failed and we were unable to recover it.
00:35:35.171 [2024-11-18 00:40:58.631210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.631237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.631391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.631419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.631501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.631528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.631655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.631685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.631800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.631827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.631942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.631980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.632069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.632098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.632196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.632235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.632372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.632401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.632510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.632537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.632646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.632673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.632768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.632795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.632883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.632911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.632998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.633025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.633159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.633198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.633299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.633335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.633454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.633484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.633623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.633650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.633740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.633768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.633859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.633887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.634002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.634030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.172 [2024-11-18 00:40:58.634142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.172 [2024-11-18 00:40:58.634175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.172 qpair failed and we were unable to recover it.
00:35:35.173 [2024-11-18 00:40:58.634267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.173 [2024-11-18 00:40:58.634294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.173 qpair failed and we were unable to recover it.
00:35:35.173 [2024-11-18 00:40:58.634387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.173 [2024-11-18 00:40:58.634414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.173 qpair failed and we were unable to recover it.
00:35:35.173 [2024-11-18 00:40:58.634503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.173 [2024-11-18 00:40:58.634530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.173 qpair failed and we were unable to recover it.
00:35:35.173 [2024-11-18 00:40:58.634652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.173 [2024-11-18 00:40:58.634679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.173 qpair failed and we were unable to recover it.
00:35:35.173 [2024-11-18 00:40:58.634764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.634794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 00:35:35.173 [2024-11-18 00:40:58.634884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.634913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 00:35:35.173 [2024-11-18 00:40:58.635032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.635062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 00:35:35.173 [2024-11-18 00:40:58.635143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.635170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 00:35:35.173 [2024-11-18 00:40:58.635261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.635288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 
00:35:35.173 [2024-11-18 00:40:58.635388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.635416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 00:35:35.173 [2024-11-18 00:40:58.635533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.635560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 00:35:35.173 [2024-11-18 00:40:58.635696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.635733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 00:35:35.173 [2024-11-18 00:40:58.635847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.635873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 00:35:35.173 [2024-11-18 00:40:58.635969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.635996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 
00:35:35.173 [2024-11-18 00:40:58.636134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.636160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 00:35:35.173 [2024-11-18 00:40:58.636251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.636278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 00:35:35.173 [2024-11-18 00:40:58.636414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.636442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 00:35:35.173 [2024-11-18 00:40:58.636529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.636555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 00:35:35.173 [2024-11-18 00:40:58.636686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.636713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 
00:35:35.173 [2024-11-18 00:40:58.636820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.636846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 00:35:35.173 [2024-11-18 00:40:58.636931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.636960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 00:35:35.173 [2024-11-18 00:40:58.637075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.637103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 00:35:35.173 [2024-11-18 00:40:58.637227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.637255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 00:35:35.173 [2024-11-18 00:40:58.637356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.637384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 
00:35:35.173 [2024-11-18 00:40:58.637499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.637526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 00:35:35.173 [2024-11-18 00:40:58.637645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.173 [2024-11-18 00:40:58.637672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.173 qpair failed and we were unable to recover it. 00:35:35.173 [2024-11-18 00:40:58.637753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.637786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.637873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.637912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.638001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.638029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 
00:35:35.174 [2024-11-18 00:40:58.638171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.638199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.638321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.638349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.638466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.638492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.638628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.638655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.638763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.638790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 
00:35:35.174 [2024-11-18 00:40:58.638904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.638935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.639049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.639078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.639166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.639195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.639321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.639349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.639432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.639459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 
00:35:35.174 [2024-11-18 00:40:58.639569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.639596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.639704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.639732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.639860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.639887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.640004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.640032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.640145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.640172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 
00:35:35.174 [2024-11-18 00:40:58.640257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.640284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.640384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.640411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.640492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.640519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.640618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.640645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.640752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.640780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 
00:35:35.174 [2024-11-18 00:40:58.640889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.640916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.641022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.641062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.641148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.174 [2024-11-18 00:40:58.641176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.174 qpair failed and we were unable to recover it. 00:35:35.174 [2024-11-18 00:40:58.641323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.641352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.641449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.641477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 
00:35:35.175 [2024-11-18 00:40:58.641550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.641578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.641668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.641696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.641778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.641807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.641921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.641948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.642061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.642088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 
00:35:35.175 [2024-11-18 00:40:58.642195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.642222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.642302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.642337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.642463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.642490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.642615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.642641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.642734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.642761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 
00:35:35.175 [2024-11-18 00:40:58.642846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.642872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.642980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.643006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.643115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.643146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.643287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.643324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.643439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.643465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 
00:35:35.175 [2024-11-18 00:40:58.643552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.643579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.643699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.643727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.643811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.643838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.643930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.643956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.644048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.644075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 
00:35:35.175 [2024-11-18 00:40:58.644187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.644216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.644349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.644390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.644523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.644564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.644689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.644717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.175 qpair failed and we were unable to recover it. 00:35:35.175 [2024-11-18 00:40:58.644840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.175 [2024-11-18 00:40:58.644867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 
00:35:35.176 [2024-11-18 00:40:58.644952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.644979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 00:35:35.176 [2024-11-18 00:40:58.645080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.645107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 00:35:35.176 [2024-11-18 00:40:58.645222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.645249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 00:35:35.176 [2024-11-18 00:40:58.645338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.645365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 00:35:35.176 [2024-11-18 00:40:58.645472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.645499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 
00:35:35.176 [2024-11-18 00:40:58.645620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.645647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 00:35:35.176 [2024-11-18 00:40:58.645738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.645765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 00:35:35.176 [2024-11-18 00:40:58.645858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.645885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 00:35:35.176 [2024-11-18 00:40:58.645993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.646019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 00:35:35.176 [2024-11-18 00:40:58.646132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.646159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 
00:35:35.176 [2024-11-18 00:40:58.646302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.646348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 00:35:35.176 [2024-11-18 00:40:58.646476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.646515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 00:35:35.176 [2024-11-18 00:40:58.646616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.646644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 00:35:35.176 [2024-11-18 00:40:58.646728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.646756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 00:35:35.176 [2024-11-18 00:40:58.646868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.646897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 
00:35:35.176 [2024-11-18 00:40:58.646972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.647000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 00:35:35.176 [2024-11-18 00:40:58.647112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.647140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 00:35:35.176 [2024-11-18 00:40:58.647221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.647249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 00:35:35.176 [2024-11-18 00:40:58.647371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.647400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 00:35:35.176 [2024-11-18 00:40:58.647485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.176 [2024-11-18 00:40:58.647512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.176 qpair failed and we were unable to recover it. 
00:35:35.176 [2024-11-18 00:40:58.647633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.176 [2024-11-18 00:40:58.647660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.176 qpair failed and we were unable to recover it.
00:35:35.176 [2024-11-18 00:40:58.647742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.176 [2024-11-18 00:40:58.647769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.176 qpair failed and we were unable to recover it.
00:35:35.176 [2024-11-18 00:40:58.647853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.176 [2024-11-18 00:40:58.647882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.176 qpair failed and we were unable to recover it.
00:35:35.176 [2024-11-18 00:40:58.648016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.176 [2024-11-18 00:40:58.648043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.176 qpair failed and we were unable to recover it.
00:35:35.176 [2024-11-18 00:40:58.648161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.176 [2024-11-18 00:40:58.648188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.176 qpair failed and we were unable to recover it.
00:35:35.176 [2024-11-18 00:40:58.648307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.176 [2024-11-18 00:40:58.648341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.648425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.648452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.648532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.648564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.648650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.648678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.648765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.648804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.648956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.648984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.649100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.649127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.649244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.649271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.649402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.649443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.649539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.649566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.649687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.649715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.649834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.649861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.649974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.650001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.650077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.650103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.650219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.650245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.650331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.650358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.650448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.650474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.650550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.650576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.650725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.650752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.650834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.650860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.650995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.651022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.651114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.651141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.651254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.651281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.651403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.651432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.651553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.651584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.651685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.651712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.177 [2024-11-18 00:40:58.651864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.177 [2024-11-18 00:40:58.651891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.177 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.652003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.652030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.652142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.652169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.652276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.652329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.652419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.652446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.652563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.652601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.652688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.652715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.652801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.652828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.652908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.652938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.653063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.653092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.653209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.653236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.653332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.653360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.653443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.653469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.653546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.653573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.653703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.653729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.653876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.653903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.654009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.654049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.654199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.654228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.654319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.654347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.654466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.654493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.654581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.654612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.654694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.178 [2024-11-18 00:40:58.654721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.178 qpair failed and we were unable to recover it.
00:35:35.178 [2024-11-18 00:40:58.654837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.654864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.654955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.654982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.655121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.655148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.655235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.655262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.655353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.655381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.655467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.655494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.655579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.655612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.655725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.655752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.655877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.655907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.656003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.656030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.656138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.656165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.656248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.656277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.656394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.656422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.656544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.656584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.656703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.656731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.656827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:35:35.179 [2024-11-18 00:40:58.656841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.656868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.656957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.656984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.657101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.657129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.657220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.657249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.657388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.657416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.657538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.657565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.657651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.657683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.657772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.657799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.657915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.657943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.658068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.658096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.658188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.179 [2024-11-18 00:40:58.658216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.179 qpair failed and we were unable to recover it.
00:35:35.179 [2024-11-18 00:40:58.658343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.180 [2024-11-18 00:40:58.658371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.180 qpair failed and we were unable to recover it.
00:35:35.180 [2024-11-18 00:40:58.658482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.180 [2024-11-18 00:40:58.658509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.180 qpair failed and we were unable to recover it.
00:35:35.180 [2024-11-18 00:40:58.658619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.180 [2024-11-18 00:40:58.658646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.180 qpair failed and we were unable to recover it.
00:35:35.180 [2024-11-18 00:40:58.658754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.180 [2024-11-18 00:40:58.658782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.180 qpair failed and we were unable to recover it.
00:35:35.180 [2024-11-18 00:40:58.658896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.180 [2024-11-18 00:40:58.658923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.180 qpair failed and we were unable to recover it.
00:35:35.180 [2024-11-18 00:40:58.659016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.180 [2024-11-18 00:40:58.659044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.180 qpair failed and we were unable to recover it.
00:35:35.180 [2024-11-18 00:40:58.659156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.180 [2024-11-18 00:40:58.659185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.180 qpair failed and we were unable to recover it.
00:35:35.180 [2024-11-18 00:40:58.659335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.180 [2024-11-18 00:40:58.659375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.180 qpair failed and we were unable to recover it.
00:35:35.180 [2024-11-18 00:40:58.659473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.180 [2024-11-18 00:40:58.659502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.180 qpair failed and we were unable to recover it.
00:35:35.180 [2024-11-18 00:40:58.659603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.180 [2024-11-18 00:40:58.659632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.180 qpair failed and we were unable to recover it.
00:35:35.180 [2024-11-18 00:40:58.659747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.180 [2024-11-18 00:40:58.659774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.180 qpair failed and we were unable to recover it.
00:35:35.180 [2024-11-18 00:40:58.659866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.180 [2024-11-18 00:40:58.659894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.180 qpair failed and we were unable to recover it.
00:35:35.180 [2024-11-18 00:40:58.660015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.180 [2024-11-18 00:40:58.660042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.180 qpair failed and we were unable to recover it.
00:35:35.180 [2024-11-18 00:40:58.660181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.180 [2024-11-18 00:40:58.660207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.180 qpair failed and we were unable to recover it.
00:35:35.180 [2024-11-18 00:40:58.660295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.180 [2024-11-18 00:40:58.660332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.180 qpair failed and we were unable to recover it.
00:35:35.180 [2024-11-18 00:40:58.660422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.180 [2024-11-18 00:40:58.660449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.180 qpair failed and we were unable to recover it.
00:35:35.180 [2024-11-18 00:40:58.660532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.180 [2024-11-18 00:40:58.660559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.180 qpair failed and we were unable to recover it. 00:35:35.180 [2024-11-18 00:40:58.660712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.180 [2024-11-18 00:40:58.660739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.180 qpair failed and we were unable to recover it. 00:35:35.180 [2024-11-18 00:40:58.660851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.180 [2024-11-18 00:40:58.660878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.180 qpair failed and we were unable to recover it. 00:35:35.180 [2024-11-18 00:40:58.660968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.180 [2024-11-18 00:40:58.660996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.180 qpair failed and we were unable to recover it. 00:35:35.180 [2024-11-18 00:40:58.661094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.180 [2024-11-18 00:40:58.661123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.180 qpair failed and we were unable to recover it. 
00:35:35.180 [2024-11-18 00:40:58.661235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.180 [2024-11-18 00:40:58.661262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.180 qpair failed and we were unable to recover it. 00:35:35.180 [2024-11-18 00:40:58.661377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.180 [2024-11-18 00:40:58.661405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.180 qpair failed and we were unable to recover it. 00:35:35.180 [2024-11-18 00:40:58.661513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.180 [2024-11-18 00:40:58.661540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.180 qpair failed and we were unable to recover it. 00:35:35.180 [2024-11-18 00:40:58.661666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.180 [2024-11-18 00:40:58.661694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.180 qpair failed and we were unable to recover it. 00:35:35.180 [2024-11-18 00:40:58.661784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.180 [2024-11-18 00:40:58.661811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.180 qpair failed and we were unable to recover it. 
00:35:35.180 [2024-11-18 00:40:58.661929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.180 [2024-11-18 00:40:58.661958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.662044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.662071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.662182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.662208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.662356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.662383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.662495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.662522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 
00:35:35.181 [2024-11-18 00:40:58.662661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.662688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.662770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.662797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.662880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.662907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.662993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.663020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.663099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.663132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 
00:35:35.181 [2024-11-18 00:40:58.663295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.663346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.663451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.663480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.663592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.663619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.663701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.663728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.663803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.663831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 
00:35:35.181 [2024-11-18 00:40:58.663956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.663983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.664065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.664094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.664178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.664205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.664287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.664321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.664463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.664490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 
00:35:35.181 [2024-11-18 00:40:58.664568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.664596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.664678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.664706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.664801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.664828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.664948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.664976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.665085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.665113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 
00:35:35.181 [2024-11-18 00:40:58.665254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.665281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.181 qpair failed and we were unable to recover it. 00:35:35.181 [2024-11-18 00:40:58.665425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.181 [2024-11-18 00:40:58.665465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.665564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.665594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.665705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.665733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.665872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.665898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 
00:35:35.182 [2024-11-18 00:40:58.665981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.666009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.666131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.666160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.666289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.666328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.666423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.666451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.666564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.666592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 
00:35:35.182 [2024-11-18 00:40:58.666673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.666700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.666824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.666853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.666971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.666999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.667111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.667138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.667228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.667255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 
00:35:35.182 [2024-11-18 00:40:58.667365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.667392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.667503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.667530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.667607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.667634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.667779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.667806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.667921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.667947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 
00:35:35.182 [2024-11-18 00:40:58.668030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.668057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.668138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.668165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.668360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.668387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.668476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.668506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.668623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.668655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 
00:35:35.182 [2024-11-18 00:40:58.668781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.668811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.668927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.668955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.182 [2024-11-18 00:40:58.669072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.182 [2024-11-18 00:40:58.669100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.182 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.669189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.669217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.669337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.669365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 
00:35:35.183 [2024-11-18 00:40:58.669480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.669506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.669627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.669654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.669763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.669789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.669944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.669985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.670085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.670114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 
00:35:35.183 [2024-11-18 00:40:58.670229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.670258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.670388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.670416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.670530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.670557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.670691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.670719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.670869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.670896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 
00:35:35.183 [2024-11-18 00:40:58.671012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.671049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.671203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.671230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.671322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.671351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.671549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.671575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.671683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.671710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 
00:35:35.183 [2024-11-18 00:40:58.671802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.671829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.671948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.671975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.672054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.672081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.672203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.672231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.672343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.672370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 
00:35:35.183 [2024-11-18 00:40:58.672482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.672509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.672600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.672629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.672763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.672803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.672902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.183 [2024-11-18 00:40:58.672930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.183 qpair failed and we were unable to recover it. 00:35:35.183 [2024-11-18 00:40:58.673021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.673049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 
00:35:35.184 [2024-11-18 00:40:58.673191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.673218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.673303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.673335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.673449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.673477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.673597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.673625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.673707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.673734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 
00:35:35.184 [2024-11-18 00:40:58.673851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.673878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.673994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.674022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.674109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.674136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.674294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.674342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.674473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.674502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 
00:35:35.184 [2024-11-18 00:40:58.674627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.674655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.674735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.674762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.674876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.674902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.674981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.675008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.675121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.675149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 
00:35:35.184 [2024-11-18 00:40:58.675233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.675261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.675399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.675439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.675538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.675568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.675664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.675692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.675779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.675806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 
00:35:35.184 [2024-11-18 00:40:58.675946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.675973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.676056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.676083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.676174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.676201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.676294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.676330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 00:35:35.184 [2024-11-18 00:40:58.676415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.676442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.184 qpair failed and we were unable to recover it. 
00:35:35.184 [2024-11-18 00:40:58.676524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.184 [2024-11-18 00:40:58.676551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 00:35:35.185 [2024-11-18 00:40:58.676688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.676715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 00:35:35.185 [2024-11-18 00:40:58.676819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.676845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 00:35:35.185 [2024-11-18 00:40:58.676933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.676962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 00:35:35.185 [2024-11-18 00:40:58.677083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.677111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 
00:35:35.185 [2024-11-18 00:40:58.677193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.677220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 00:35:35.185 [2024-11-18 00:40:58.677338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.677366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 00:35:35.185 [2024-11-18 00:40:58.677459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.677487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 00:35:35.185 [2024-11-18 00:40:58.677594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.677633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 00:35:35.185 [2024-11-18 00:40:58.677756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.677784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 
00:35:35.185 [2024-11-18 00:40:58.677874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.677901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 00:35:35.185 [2024-11-18 00:40:58.678023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.678055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 00:35:35.185 [2024-11-18 00:40:58.678152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.678181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 00:35:35.185 [2024-11-18 00:40:58.678305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.678337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 00:35:35.185 [2024-11-18 00:40:58.678438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.678466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 
00:35:35.185 [2024-11-18 00:40:58.678548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.678576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 00:35:35.185 [2024-11-18 00:40:58.678659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.678686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 00:35:35.185 [2024-11-18 00:40:58.678779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.678806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 00:35:35.185 [2024-11-18 00:40:58.678885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.678912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 00:35:35.185 [2024-11-18 00:40:58.679005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.679045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 
00:35:35.185 [2024-11-18 00:40:58.679168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.679196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.185 qpair failed and we were unable to recover it. 00:35:35.185 [2024-11-18 00:40:58.679282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.185 [2024-11-18 00:40:58.679319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.679445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.679472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.679567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.679594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.679686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.679712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 
00:35:35.186 [2024-11-18 00:40:58.679862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.679891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.680009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.680040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.680125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.680153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.680242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.680269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.680371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.680398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 
00:35:35.186 [2024-11-18 00:40:58.680491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.680518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.680598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.680624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.680705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.680732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.680844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.680871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.680951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.680978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 
00:35:35.186 [2024-11-18 00:40:58.681087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.681114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.681227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.681257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.681382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.681411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.681502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.681534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.681645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.681672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 
00:35:35.186 [2024-11-18 00:40:58.681765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.681792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.681901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.681928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.682067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.682093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.682182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.682208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.682358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.682399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 
00:35:35.186 [2024-11-18 00:40:58.682496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.682524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.682633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.186 [2024-11-18 00:40:58.682660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.186 qpair failed and we were unable to recover it. 00:35:35.186 [2024-11-18 00:40:58.682774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.187 [2024-11-18 00:40:58.682801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.187 qpair failed and we were unable to recover it. 00:35:35.187 [2024-11-18 00:40:58.682880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.187 [2024-11-18 00:40:58.682906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.187 qpair failed and we were unable to recover it. 00:35:35.187 [2024-11-18 00:40:58.683023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.187 [2024-11-18 00:40:58.683050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.187 qpair failed and we were unable to recover it. 
00:35:35.187 [2024-11-18 00:40:58.683134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.187 [2024-11-18 00:40:58.683160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.187 qpair failed and we were unable to recover it. 00:35:35.187 [2024-11-18 00:40:58.683273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.187 [2024-11-18 00:40:58.683300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.187 qpair failed and we were unable to recover it. 00:35:35.187 [2024-11-18 00:40:58.683424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.187 [2024-11-18 00:40:58.683451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.187 qpair failed and we were unable to recover it. 00:35:35.187 [2024-11-18 00:40:58.683534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.187 [2024-11-18 00:40:58.683561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.187 qpair failed and we were unable to recover it. 00:35:35.187 [2024-11-18 00:40:58.683675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.187 [2024-11-18 00:40:58.683702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.187 qpair failed and we were unable to recover it. 
00:35:35.187 [2024-11-18 00:40:58.683826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.187 [2024-11-18 00:40:58.683865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.187 qpair failed and we were unable to recover it. 00:35:35.187 [2024-11-18 00:40:58.683954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.187 [2024-11-18 00:40:58.683982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.187 qpair failed and we were unable to recover it. 00:35:35.187 [2024-11-18 00:40:58.684098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.187 [2024-11-18 00:40:58.684126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.187 qpair failed and we were unable to recover it. 00:35:35.187 [2024-11-18 00:40:58.684208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.187 [2024-11-18 00:40:58.684235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.187 qpair failed and we were unable to recover it. 00:35:35.187 [2024-11-18 00:40:58.684331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.187 [2024-11-18 00:40:58.684358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.187 qpair failed and we were unable to recover it. 
00:35:35.187 [2024-11-18 00:40:58.684441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.187 [2024-11-18 00:40:58.684468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.187 qpair failed and we were unable to recover it. 00:35:35.187 [2024-11-18 00:40:58.684582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.187 [2024-11-18 00:40:58.684609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.187 qpair failed and we were unable to recover it. 00:35:35.187 [2024-11-18 00:40:58.684694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.187 [2024-11-18 00:40:58.684721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.187 qpair failed and we were unable to recover it. 00:35:35.187 [2024-11-18 00:40:58.684835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.187 [2024-11-18 00:40:58.684862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.187 qpair failed and we were unable to recover it. 00:35:35.187 [2024-11-18 00:40:58.684936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.187 [2024-11-18 00:40:58.684963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.187 qpair failed and we were unable to recover it. 
00:35:35.187 [2024-11-18 00:40:58.685097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.187 [2024-11-18 00:40:58.685129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.187 qpair failed and we were unable to recover it.
00:35:35.187 [2024-11-18 00:40:58.685407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.187 [2024-11-18 00:40:58.685436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.187 qpair failed and we were unable to recover it.
00:35:35.187 [2024-11-18 00:40:58.685929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.187 [2024-11-18 00:40:58.685957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.187 qpair failed and we were unable to recover it.
00:35:35.188 [2024-11-18 00:40:58.688155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.188 [2024-11-18 00:40:58.688196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.188 qpair failed and we were unable to recover it.
00:35:35.192 (the connect()/qpair-failure sequence above repeats through [2024-11-18 00:40:58.700807] for tqpairs 0x18bcb40, 0x7eff44000b90, 0x7eff48000b90, and 0x7eff50000b90, each attempt failing with errno = 111 against 10.0.0.2:4420)
00:35:35.192 [2024-11-18 00:40:58.700957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.700984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 00:35:35.192 [2024-11-18 00:40:58.701109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.701139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 00:35:35.192 [2024-11-18 00:40:58.701270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.701318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 00:35:35.192 [2024-11-18 00:40:58.701423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.701452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 00:35:35.192 [2024-11-18 00:40:58.701534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.701561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 
00:35:35.192 [2024-11-18 00:40:58.701652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.701683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 00:35:35.192 [2024-11-18 00:40:58.701773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.701801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 00:35:35.192 [2024-11-18 00:40:58.701883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.701910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 00:35:35.192 [2024-11-18 00:40:58.702034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.702075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 00:35:35.192 [2024-11-18 00:40:58.702176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.702205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 
00:35:35.192 [2024-11-18 00:40:58.702288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.702321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 00:35:35.192 [2024-11-18 00:40:58.702434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.702462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 00:35:35.192 [2024-11-18 00:40:58.702578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.702605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 00:35:35.192 [2024-11-18 00:40:58.702685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.702712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 00:35:35.192 [2024-11-18 00:40:58.702798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.702826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 
00:35:35.192 [2024-11-18 00:40:58.702913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.702943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 00:35:35.192 [2024-11-18 00:40:58.703033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.703063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 00:35:35.192 [2024-11-18 00:40:58.703180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.703208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 00:35:35.192 [2024-11-18 00:40:58.703291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.703324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 00:35:35.192 [2024-11-18 00:40:58.703414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.192 [2024-11-18 00:40:58.703443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.192 qpair failed and we were unable to recover it. 
00:35:35.192 [2024-11-18 00:40:58.703532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.703561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.703706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.703734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.703849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.703875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.703966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.703995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.704074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.704101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 
00:35:35.193 [2024-11-18 00:40:58.704185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.704213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.704331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.704359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.704469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.704495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.704580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.704607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.704751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.704777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 
00:35:35.193 [2024-11-18 00:40:58.704871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.704901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.705021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.705050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.705139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.705167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.705266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.705293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.705419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.705447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 
00:35:35.193 [2024-11-18 00:40:58.705561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.705588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.705719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.705747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.705860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.705887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.705985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.706014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.706098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.706126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 
00:35:35.193 [2024-11-18 00:40:58.706213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.706243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.706335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.706364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.706453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.706480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.706558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.706585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.706669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.706697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 
00:35:35.193 [2024-11-18 00:40:58.706780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.193 [2024-11-18 00:40:58.706809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.193 qpair failed and we were unable to recover it. 00:35:35.193 [2024-11-18 00:40:58.706901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.706931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.707011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.707038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.707113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.707140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.707221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.707249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 
00:35:35.194 [2024-11-18 00:40:58.707341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.707370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.707459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.707488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.707571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.707598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.707625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:35.194 [2024-11-18 00:40:58.707658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:35.194 [2024-11-18 00:40:58.707672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:35.194 [2024-11-18 00:40:58.707684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:35.194 [2024-11-18 00:40:58.707688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.707695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:35.194 [2024-11-18 00:40:58.707714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.707798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.707824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.707968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.707996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.708090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.708119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.708208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.708242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.708335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.708362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 
00:35:35.194 [2024-11-18 00:40:58.708451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.708478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.708571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.708598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.708703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.708730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.708808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.708836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.708930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.708957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 
00:35:35.194 [2024-11-18 00:40:58.709071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.709100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.709219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.709248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.709347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.709377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.709501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.709529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.709613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.709640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 
00:35:35.194 [2024-11-18 00:40:58.709724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.709752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.194 qpair failed and we were unable to recover it. 00:35:35.194 [2024-11-18 00:40:58.709834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.194 [2024-11-18 00:40:58.709861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.195 qpair failed and we were unable to recover it. 00:35:35.195 [2024-11-18 00:40:58.710009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.195 [2024-11-18 00:40:58.710037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.195 qpair failed and we were unable to recover it. 00:35:35.195 [2024-11-18 00:40:58.710134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.195 [2024-11-18 00:40:58.710174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.195 qpair failed and we were unable to recover it. 
00:35:35.195 [2024-11-18 00:40:58.710153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:35:35.195 [2024-11-18 00:40:58.710206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:35:35.195 [2024-11-18 00:40:58.710268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.710253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:35:35.195 [2024-11-18 00:40:58.710297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.195 [2024-11-18 00:40:58.710258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.710395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.710424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.710546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.710572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.710659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.710687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.710786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.710815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff44000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.710897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.710927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.711022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.711049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.711163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.711190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.711283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.711317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.711407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.711435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.711527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.711554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.711636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.711664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.711746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.711773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.711851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.711879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.711988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.712016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.712091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.712118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.712229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.712257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.712373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.712401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.712513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.712541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.712621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.712649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.712757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.712784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.712867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.195 [2024-11-18 00:40:58.712895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.195 qpair failed and we were unable to recover it.
00:35:35.195 [2024-11-18 00:40:58.713002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.713030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.713114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.713141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.713231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.713258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.713338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.713375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.713462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.713491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.713571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.713599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.713682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.713709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.713817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.713844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.713918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.713945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.714033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.714060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.714170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.714197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.714273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.714300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.714389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.714417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.714511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.714539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.714623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.714651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.714743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.714771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.714882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.714909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.714996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.715023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.715121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.715161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.715284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.715320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.715412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.715439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.715519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.715546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.715628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.715655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.715733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.715760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.715837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.715863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.715984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.196 [2024-11-18 00:40:58.716014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.196 qpair failed and we were unable to recover it.
00:35:35.196 [2024-11-18 00:40:58.716095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.716123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.716204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.716232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.716329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.716357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.716474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.716502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.716585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.716613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.716725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.716752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.716841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.716868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.716947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.716975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.717070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.717100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.717214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.717241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.717365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.717393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.717478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.717505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.717589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.717615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.717698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.717725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.717837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.717865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.717948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.717979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.718098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.718125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.718257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.718286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.718382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.718409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.718506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.718533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.718620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.718648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.718789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.718816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.718903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.718930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.719010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.719038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.719115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.197 [2024-11-18 00:40:58.719142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.197 qpair failed and we were unable to recover it.
00:35:35.197 [2024-11-18 00:40:58.719278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.719323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.719416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.719446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.719546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.719573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.719661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.719688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.719771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.719798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.719884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.719911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.720025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.720054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.720137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.720166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.720253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.720281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.720438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.720466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.720546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.720574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.720660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.720688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.720765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.720793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.720879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.720907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.720992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.721019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.721135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.721163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.721286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.721335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.721465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.721495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.721589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.721618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.721733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.721761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.721846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.198 [2024-11-18 00:40:58.721873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.198 qpair failed and we were unable to recover it.
00:35:35.198 [2024-11-18 00:40:58.721953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.721981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.722071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.722099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.722212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.722239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.722350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.722377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.722461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.722490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.722598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.722625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.722719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.722748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.722875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.722903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.722987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.723014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.723096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.723129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.723262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.723290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.723411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.723438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.723534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.723562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.723643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.723671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.723797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.723824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.723910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.723937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.724037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.724077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.724172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.724201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.724299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.724334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.724434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.199 [2024-11-18 00:40:58.724462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.199 qpair failed and we were unable to recover it.
00:35:35.199 [2024-11-18 00:40:58.724551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.199 [2024-11-18 00:40:58.724577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.199 qpair failed and we were unable to recover it. 00:35:35.199 [2024-11-18 00:40:58.724695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.199 [2024-11-18 00:40:58.724723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.199 qpair failed and we were unable to recover it. 00:35:35.199 [2024-11-18 00:40:58.724808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.199 [2024-11-18 00:40:58.724837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.199 qpair failed and we were unable to recover it. 00:35:35.199 [2024-11-18 00:40:58.724958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.199 [2024-11-18 00:40:58.724984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.199 qpair failed and we were unable to recover it. 00:35:35.199 [2024-11-18 00:40:58.725073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.199 [2024-11-18 00:40:58.725102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.199 qpair failed and we were unable to recover it. 
00:35:35.199 [2024-11-18 00:40:58.725192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.199 [2024-11-18 00:40:58.725220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.199 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.725329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.725357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.725436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.725463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.725545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.725572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.725651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.725679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 
00:35:35.200 [2024-11-18 00:40:58.725767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.725794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.725879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.725906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.725987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.726014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.726106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.726135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.726233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.726262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 
00:35:35.200 [2024-11-18 00:40:58.726399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.726426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.726500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.726532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.726613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.726641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.726719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.726746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.726837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.726866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 
00:35:35.200 [2024-11-18 00:40:58.726955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.726984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.727070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.727098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.727181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.727209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.727321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.727348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.727459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.727486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 
00:35:35.200 [2024-11-18 00:40:58.727574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.727601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.727684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.727711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.727829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.727856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.727949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.727978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.728060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.728090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 
00:35:35.200 [2024-11-18 00:40:58.728183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.728212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.200 [2024-11-18 00:40:58.728300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.200 [2024-11-18 00:40:58.728340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.200 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.728420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.728447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.728531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.728558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.728679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.728706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 
00:35:35.201 [2024-11-18 00:40:58.728783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.728810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.728897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.728924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.729037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.729065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.729184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.729211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.729295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.729330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 
00:35:35.201 [2024-11-18 00:40:58.729419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.729447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.729526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.729552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.729634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.729661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.729771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.729799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.729922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.729950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 
00:35:35.201 [2024-11-18 00:40:58.730033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.730060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.730182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.730209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.730297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.730330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.730429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.730456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.730542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.730570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 
00:35:35.201 [2024-11-18 00:40:58.730668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.730695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.730776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.730804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.730880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.730907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.731047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.731088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.731176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.731205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 
00:35:35.201 [2024-11-18 00:40:58.731322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.731358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.731449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.731476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.201 [2024-11-18 00:40:58.731595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.201 [2024-11-18 00:40:58.731628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.201 qpair failed and we were unable to recover it. 00:35:35.202 [2024-11-18 00:40:58.731725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.202 [2024-11-18 00:40:58.731752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.202 qpair failed and we were unable to recover it. 00:35:35.202 [2024-11-18 00:40:58.731839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.202 [2024-11-18 00:40:58.731866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.202 qpair failed and we were unable to recover it. 
00:35:35.202 [2024-11-18 00:40:58.731945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.202 [2024-11-18 00:40:58.731972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.202 qpair failed and we were unable to recover it. 00:35:35.202 [2024-11-18 00:40:58.732045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.202 [2024-11-18 00:40:58.732072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.202 qpair failed and we were unable to recover it. 00:35:35.202 [2024-11-18 00:40:58.732185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.202 [2024-11-18 00:40:58.732212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.202 qpair failed and we were unable to recover it. 00:35:35.202 [2024-11-18 00:40:58.732320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.202 [2024-11-18 00:40:58.732347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.202 qpair failed and we were unable to recover it. 00:35:35.202 [2024-11-18 00:40:58.732430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.202 [2024-11-18 00:40:58.732457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.202 qpair failed and we were unable to recover it. 
00:35:35.202 [2024-11-18 00:40:58.732565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.202 [2024-11-18 00:40:58.732598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.202 qpair failed and we were unable to recover it. 00:35:35.202 [2024-11-18 00:40:58.732683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.202 [2024-11-18 00:40:58.732710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.202 qpair failed and we were unable to recover it. 00:35:35.202 [2024-11-18 00:40:58.732826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.202 [2024-11-18 00:40:58.732853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.202 qpair failed and we were unable to recover it. 00:35:35.202 [2024-11-18 00:40:58.732945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.202 [2024-11-18 00:40:58.732971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.202 qpair failed and we were unable to recover it. 00:35:35.202 [2024-11-18 00:40:58.733059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.202 [2024-11-18 00:40:58.733086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.202 qpair failed and we were unable to recover it. 
00:35:35.202 [2024-11-18 00:40:58.733178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.202 [2024-11-18 00:40:58.733208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.202 qpair failed and we were unable to recover it. 00:35:35.202 [2024-11-18 00:40:58.733302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.202 [2024-11-18 00:40:58.733336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.202 qpair failed and we were unable to recover it. 00:35:35.202 [2024-11-18 00:40:58.733438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.202 [2024-11-18 00:40:58.733466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.202 qpair failed and we were unable to recover it. 00:35:35.202 [2024-11-18 00:40:58.733544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.202 [2024-11-18 00:40:58.733571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.202 qpair failed and we were unable to recover it. 00:35:35.202 [2024-11-18 00:40:58.733670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.202 [2024-11-18 00:40:58.733699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.202 qpair failed and we were unable to recover it. 
00:35:35.202 [2024-11-18 00:40:58.733816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.202 [2024-11-18 00:40:58.733844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.202 qpair failed and we were unable to recover it.
00:35:35.202 [2024-11-18 00:40:58.734061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.202 [2024-11-18 00:40:58.734090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.202 qpair failed and we were unable to recover it.
00:35:35.203 [2024-11-18 00:40:58.736489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.203 [2024-11-18 00:40:58.736529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.203 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure triplet repeated continuously through 00:40:58.748502, cycling over tqpair=0x7eff48000b90, 0x18bcb40, and 0x7eff50000b90, all with addr=10.0.0.2, port=4420, errno = 111 ...]
00:35:35.207 [2024-11-18 00:40:58.748602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.748629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 00:35:35.207 [2024-11-18 00:40:58.748741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.748769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 00:35:35.207 [2024-11-18 00:40:58.748848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.748876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 00:35:35.207 [2024-11-18 00:40:58.748985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.749020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 00:35:35.207 [2024-11-18 00:40:58.749103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.749130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 
00:35:35.207 [2024-11-18 00:40:58.749237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.749264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 00:35:35.207 [2024-11-18 00:40:58.749388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.749428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 00:35:35.207 [2024-11-18 00:40:58.749558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.749586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 00:35:35.207 [2024-11-18 00:40:58.749681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.749708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 00:35:35.207 [2024-11-18 00:40:58.749791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.749818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 
00:35:35.207 [2024-11-18 00:40:58.749918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.749946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 00:35:35.207 [2024-11-18 00:40:58.750033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.750061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 00:35:35.207 [2024-11-18 00:40:58.750137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.750164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 00:35:35.207 [2024-11-18 00:40:58.750280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.750308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 00:35:35.207 [2024-11-18 00:40:58.750399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.750427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 
00:35:35.207 [2024-11-18 00:40:58.750505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.750533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 00:35:35.207 [2024-11-18 00:40:58.750659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.750687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 00:35:35.207 [2024-11-18 00:40:58.750762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.750789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 00:35:35.207 [2024-11-18 00:40:58.750905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.750932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.207 qpair failed and we were unable to recover it. 00:35:35.207 [2024-11-18 00:40:58.751015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.207 [2024-11-18 00:40:58.751043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 
00:35:35.208 [2024-11-18 00:40:58.751121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.751148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.751224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.751253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.751354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.751382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.751576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.751611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.751728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.751755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 
00:35:35.208 [2024-11-18 00:40:58.751860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.751887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.751967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.751994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.752075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.752111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.752230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.752258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.752360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.752387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 
00:35:35.208 [2024-11-18 00:40:58.752510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.752539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.752627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.752654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.752766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.752793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.752878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.752906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.753024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.753052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 
00:35:35.208 [2024-11-18 00:40:58.753134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.753161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.753282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.753323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.753427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.753454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.753534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.753560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.753647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.753674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 
00:35:35.208 [2024-11-18 00:40:58.753793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.753820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.753936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.753962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.754061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.754102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.754188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.754217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.208 [2024-11-18 00:40:58.754307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.754339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 
00:35:35.208 [2024-11-18 00:40:58.754425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.208 [2024-11-18 00:40:58.754451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.208 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.754542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.754569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.754714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.754742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.754826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.754855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.754946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.754972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 
00:35:35.209 [2024-11-18 00:40:58.755056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.755083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.755200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.755228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.755354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.755386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.755466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.755494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.755618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.755647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 
00:35:35.209 [2024-11-18 00:40:58.755730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.755757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.755832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.755859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.755939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.755966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.756073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.756100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.756184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.756211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 
00:35:35.209 [2024-11-18 00:40:58.756298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.756332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.756429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.756457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.756550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.756577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.756700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.756727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.756814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.756842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 
00:35:35.209 [2024-11-18 00:40:58.756928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.756955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.757045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.757073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.757182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.757215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.757304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.757358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.757499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.757525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 
00:35:35.209 [2024-11-18 00:40:58.757620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.757647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.209 qpair failed and we were unable to recover it. 00:35:35.209 [2024-11-18 00:40:58.757725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.209 [2024-11-18 00:40:58.757753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.210 qpair failed and we were unable to recover it. 00:35:35.210 [2024-11-18 00:40:58.757870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.210 [2024-11-18 00:40:58.757900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.210 qpair failed and we were unable to recover it. 00:35:35.210 [2024-11-18 00:40:58.757987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.210 [2024-11-18 00:40:58.758016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.210 qpair failed and we were unable to recover it. 00:35:35.210 [2024-11-18 00:40:58.758127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.210 [2024-11-18 00:40:58.758154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.210 qpair failed and we were unable to recover it. 
00:35:35.210 [2024-11-18 00:40:58.758237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.210 [2024-11-18 00:40:58.758264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.210 qpair failed and we were unable to recover it.
[... the three-line error above (connect() failed with errno = 111, i.e. ECONNREFUSED; sock connection error in nvme_tcp_qpair_connect_sock; "qpair failed and we were unable to recover it.") repeats continuously from 00:40:58.758 through 00:40:58.772, always targeting addr=10.0.0.2, port=4420, cycling among tqpair=0x18bcb40, tqpair=0x7eff50000b90, and tqpair=0x7eff48000b90; repeated occurrences elided ...]
00:35:35.214 [2024-11-18 00:40:58.772636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.214 [2024-11-18 00:40:58.772663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.214 qpair failed and we were unable to recover it. 00:35:35.214 [2024-11-18 00:40:58.772771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.214 [2024-11-18 00:40:58.772799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.214 qpair failed and we were unable to recover it. 00:35:35.214 [2024-11-18 00:40:58.772883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.214 [2024-11-18 00:40:58.772911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.214 qpair failed and we were unable to recover it. 00:35:35.214 [2024-11-18 00:40:58.773030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.214 [2024-11-18 00:40:58.773059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.214 qpair failed and we were unable to recover it. 00:35:35.214 [2024-11-18 00:40:58.773145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.214 [2024-11-18 00:40:58.773174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.214 qpair failed and we were unable to recover it. 
00:35:35.214 [2024-11-18 00:40:58.773285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.214 [2024-11-18 00:40:58.773318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.214 qpair failed and we were unable to recover it. 00:35:35.214 [2024-11-18 00:40:58.773407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.214 [2024-11-18 00:40:58.773435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.214 qpair failed and we were unable to recover it. 00:35:35.214 [2024-11-18 00:40:58.773529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.214 [2024-11-18 00:40:58.773556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.214 qpair failed and we were unable to recover it. 00:35:35.214 [2024-11-18 00:40:58.773635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.214 [2024-11-18 00:40:58.773663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.214 qpair failed and we were unable to recover it. 00:35:35.214 [2024-11-18 00:40:58.773772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.214 [2024-11-18 00:40:58.773800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.214 qpair failed and we were unable to recover it. 
00:35:35.214 [2024-11-18 00:40:58.773890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.214 [2024-11-18 00:40:58.773917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.214 qpair failed and we were unable to recover it. 00:35:35.214 [2024-11-18 00:40:58.774009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.214 [2024-11-18 00:40:58.774037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.214 qpair failed and we were unable to recover it. 00:35:35.214 [2024-11-18 00:40:58.774163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.774191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.774289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.774323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.774424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.774452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 
00:35:35.215 [2024-11-18 00:40:58.774564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.774591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.774672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.774700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.774804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.774831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.774907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.774932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.775021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.775049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 
00:35:35.215 [2024-11-18 00:40:58.775158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.775187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.775333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.775374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.775457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.775485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.775570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.775607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.775688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.775715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 
00:35:35.215 [2024-11-18 00:40:58.775796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.775823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.775936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.775965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.776078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.776108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.776192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.776220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.776302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.776353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 
00:35:35.215 [2024-11-18 00:40:58.776546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.776573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.776683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.776710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.776798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.776826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.776921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.776949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.777033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.777060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 
00:35:35.215 [2024-11-18 00:40:58.777139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.777169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.777284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.777320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.777413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.777440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.777519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.777551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 00:35:35.215 [2024-11-18 00:40:58.777649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.215 [2024-11-18 00:40:58.777676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.215 qpair failed and we were unable to recover it. 
00:35:35.216 [2024-11-18 00:40:58.777758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.777785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.777869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.777896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.777988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.778016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.778094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.778121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.778210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.778237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 
00:35:35.216 [2024-11-18 00:40:58.778321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.778361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.778445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.778473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.778611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.778638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.778733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.778774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.778864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.778893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 
00:35:35.216 [2024-11-18 00:40:58.778976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.779004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.779110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.779138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.779281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.779328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.779424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.779453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.779549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.779578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 
00:35:35.216 [2024-11-18 00:40:58.779654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.779681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.779761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.779788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.779877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.779904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.779988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.780015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.780120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.780147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 
00:35:35.216 [2024-11-18 00:40:58.780237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.780265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.780362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.780390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.780475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.780504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.780596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.780624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.780732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.780759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 
00:35:35.216 [2024-11-18 00:40:58.780839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.780871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.780983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.781010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.781086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.781113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.781194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.781221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.781368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.781398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 
00:35:35.216 [2024-11-18 00:40:58.781482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.781510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.781592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.216 [2024-11-18 00:40:58.781620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.216 qpair failed and we were unable to recover it. 00:35:35.216 [2024-11-18 00:40:58.781733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.781760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.781873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.781902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.781988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.782015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 
00:35:35.217 [2024-11-18 00:40:58.782096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.782124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.782249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.782277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.782386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.782415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.782501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.782528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.782644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.782671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 
00:35:35.217 [2024-11-18 00:40:58.782747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.782774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.782866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.782894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.783034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.783062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.783148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.783176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.783261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.783288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 
00:35:35.217 [2024-11-18 00:40:58.783417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.783444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.783535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.783564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.783656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.783684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.783770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.783798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.783889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.783917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 
00:35:35.217 [2024-11-18 00:40:58.784001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.784029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.784116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.784142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.784236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.784269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.784366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.784393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.784479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.784506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 
00:35:35.217 [2024-11-18 00:40:58.784588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.784615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.784702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.784729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.784826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.784853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.784935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.784963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.785059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.785101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 
00:35:35.217 [2024-11-18 00:40:58.785228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.785258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.785349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.785378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.217 [2024-11-18 00:40:58.785463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.217 [2024-11-18 00:40:58.785490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.217 qpair failed and we were unable to recover it. 00:35:35.218 [2024-11-18 00:40:58.785606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.785640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 00:35:35.218 [2024-11-18 00:40:58.785732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.785760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 
00:35:35.218 [2024-11-18 00:40:58.785850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.785879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 00:35:35.218 [2024-11-18 00:40:58.785978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.786006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 00:35:35.218 [2024-11-18 00:40:58.786100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.786129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 00:35:35.218 [2024-11-18 00:40:58.786212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.786239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 00:35:35.218 [2024-11-18 00:40:58.786320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.786354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 
00:35:35.218 [2024-11-18 00:40:58.786431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.786458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 00:35:35.218 [2024-11-18 00:40:58.786570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.786597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 00:35:35.218 [2024-11-18 00:40:58.786685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.786712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 00:35:35.218 [2024-11-18 00:40:58.786821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.786850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 00:35:35.218 [2024-11-18 00:40:58.786983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.787014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 
00:35:35.218 [2024-11-18 00:40:58.787108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.787135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 00:35:35.218 [2024-11-18 00:40:58.787211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.787243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 00:35:35.218 [2024-11-18 00:40:58.787367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.787395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 00:35:35.218 [2024-11-18 00:40:58.787483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.787510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 00:35:35.218 [2024-11-18 00:40:58.787590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.787619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 
00:35:35.218 [2024-11-18 00:40:58.787734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.787763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 00:35:35.218 [2024-11-18 00:40:58.787849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.787877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 00:35:35.218 [2024-11-18 00:40:58.787991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.218 [2024-11-18 00:40:58.788019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.218 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.788099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.788126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.788209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.788236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 
00:35:35.219 [2024-11-18 00:40:58.788322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.788351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.788441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.788469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.788550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.788577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.788685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.788712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.788789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.788816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 
00:35:35.219 [2024-11-18 00:40:58.788893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.788920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.788999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.789027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.789102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.789129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.789224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.789251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.789374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.789402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 
00:35:35.219 [2024-11-18 00:40:58.789484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.789511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.789602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.789630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.789715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.789743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.789839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.789869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.789951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.789979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 
00:35:35.219 [2024-11-18 00:40:58.790070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.790097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.790190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.790217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.790300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.790334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.790426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.790454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.790566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.790594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 
00:35:35.219 [2024-11-18 00:40:58.790674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.790701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.790792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.790820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.790937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.790966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.791049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.791076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.791172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.791199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 
00:35:35.219 [2024-11-18 00:40:58.791338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.791366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.791453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.791481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.791570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.219 [2024-11-18 00:40:58.791598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.219 qpair failed and we were unable to recover it. 00:35:35.219 [2024-11-18 00:40:58.791678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.791705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 00:35:35.220 [2024-11-18 00:40:58.791790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.791817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 
00:35:35.220 [2024-11-18 00:40:58.791935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.791962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 00:35:35.220 [2024-11-18 00:40:58.792056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.792085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 00:35:35.220 [2024-11-18 00:40:58.792177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.792205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 00:35:35.220 [2024-11-18 00:40:58.792318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.792346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 00:35:35.220 [2024-11-18 00:40:58.792435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.792467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 
00:35:35.220 [2024-11-18 00:40:58.792579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.792607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 00:35:35.220 [2024-11-18 00:40:58.792690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.792717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 00:35:35.220 [2024-11-18 00:40:58.792794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.792821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 00:35:35.220 [2024-11-18 00:40:58.792943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.792970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 00:35:35.220 [2024-11-18 00:40:58.793061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.793089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 
00:35:35.220 [2024-11-18 00:40:58.793173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.793200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 00:35:35.220 [2024-11-18 00:40:58.793320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.793361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 00:35:35.220 [2024-11-18 00:40:58.793444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.793472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 00:35:35.220 [2024-11-18 00:40:58.793585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.793613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 00:35:35.220 [2024-11-18 00:40:58.793709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.793737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 
00:35:35.220 [2024-11-18 00:40:58.793818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.220 [2024-11-18 00:40:58.793846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420
00:35:35.220 qpair failed and we were unable to recover it.
00:35:35.220 A controller has encountered a failure and is being reset.
00:35:35.220 [2024-11-18 00:40:58.793942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.220 [2024-11-18 00:40:58.793970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.220 qpair failed and we were unable to recover it.
00:35:35.220 [2024-11-18 00:40:58.794046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.220 [2024-11-18 00:40:58.794074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18bcb40 with addr=10.0.0.2, port=4420
00:35:35.220 qpair failed and we were unable to recover it.
00:35:35.220 [2024-11-18 00:40:58.794158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.220 [2024-11-18 00:40:58.794186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.220 qpair failed and we were unable to recover it.
00:35:35.220 [2024-11-18 00:40:58.794266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.220 [2024-11-18 00:40:58.794293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420
00:35:35.220 qpair failed and we were unable to recover it.
00:35:35.220 [2024-11-18 00:40:58.794382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.794410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 00:35:35.220 [2024-11-18 00:40:58.794493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.794521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 00:35:35.220 [2024-11-18 00:40:58.794597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.794624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff50000b90 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 00:35:35.220 [2024-11-18 00:40:58.794706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.794736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff48000b90 with addr=10.0.0.2, port=4420 00:35:35.220 qpair failed and we were unable to recover it. 
00:35:35.220 [2024-11-18 00:40:58.794841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.220 [2024-11-18 00:40:58.794878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ca970 with addr=10.0.0.2, port=4420 00:35:35.220 [2024-11-18 00:40:58.794897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca970 is same with the state(6) to be set 00:35:35.220 [2024-11-18 00:40:58.794923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ca970 (9): Bad file descriptor 00:35:35.220 [2024-11-18 00:40:58.794943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:35:35.220 [2024-11-18 00:40:58.794958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:35:35.220 [2024-11-18 00:40:58.794975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:35:35.220 Unable to reset the controller. 
00:35:35.220 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:35.220 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:35:35.220 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:35.220 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:35.220 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:35.221 Malloc0 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:35.221 [2024-11-18 
00:40:58.930881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.221 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:35.582 [2024-11-18 
00:40:58.959226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:35.582 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.582 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:35.582 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.582 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:35.582 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.582 00:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 404223 00:35:36.147 Controller properly reset. 00:35:41.432 Initializing NVMe Controllers 00:35:41.432 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:41.432 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:41.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:35:41.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:35:41.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:35:41.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:35:41.432 Initialization complete. Launching workers. 
00:35:41.432 Starting thread on core 1 00:35:41.432 Starting thread on core 2 00:35:41.432 Starting thread on core 3 00:35:41.432 Starting thread on core 0 00:35:41.432 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:35:41.432 00:35:41.432 real 0m10.718s 00:35:41.432 user 0m34.049s 00:35:41.432 sys 0m7.552s 00:35:41.432 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:41.432 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:41.432 ************************************ 00:35:41.432 END TEST nvmf_target_disconnect_tc2 00:35:41.432 ************************************ 00:35:41.432 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:35:41.432 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:41.432 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:41.432 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:41.432 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:35:41.432 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:41.432 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:35:41.432 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:41.432 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:41.432 rmmod nvme_tcp 00:35:41.432 rmmod nvme_fabrics 00:35:41.432 rmmod nvme_keyring 00:35:41.432 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:35:41.432 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:35:41.432 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:35:41.432 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 404743 ']' 00:35:41.433 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 404743 00:35:41.433 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 404743 ']' 00:35:41.433 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 404743 00:35:41.433 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:35:41.433 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:41.433 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 404743 00:35:41.433 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:35:41.433 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:35:41.433 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 404743' 00:35:41.433 killing process with pid 404743 00:35:41.433 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 404743 00:35:41.433 00:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 404743 00:35:41.433 00:41:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:41.433 00:41:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:41.433 00:41:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:41.433 00:41:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:35:41.433 00:41:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:35:41.433 00:41:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:41.433 00:41:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:35:41.433 00:41:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:41.433 00:41:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:41.433 00:41:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:41.433 00:41:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:41.433 00:41:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.973 00:41:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:43.973 00:35:43.973 real 0m15.794s 00:35:43.973 user 0m59.756s 00:35:43.973 sys 0m10.141s 00:35:43.973 00:41:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:43.973 00:41:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:43.973 ************************************ 00:35:43.973 END TEST nvmf_target_disconnect 00:35:43.973 ************************************ 00:35:43.973 00:41:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:43.973 00:35:43.973 real 6m44.717s 00:35:43.973 user 17m24.113s 00:35:43.973 sys 1m32.214s 00:35:43.973 00:41:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:43.973 00:41:07 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.973 ************************************ 00:35:43.973 END TEST nvmf_host 00:35:43.973 ************************************ 00:35:43.973 00:41:07 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:35:43.973 00:41:07 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:35:43.973 00:41:07 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:43.973 00:41:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:43.973 00:41:07 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:43.973 00:41:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:43.973 ************************************ 00:35:43.973 START TEST nvmf_target_core_interrupt_mode 00:35:43.973 ************************************ 00:35:43.973 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:43.973 * Looking for test storage... 
00:35:43.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:35:43.973 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:43.973 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:35:43.973 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:43.973 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:43.973 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:43.973 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:43.973 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:43.973 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:35:43.973 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:35:43.973 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:35:43.973 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:35:43.973 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:35:43.973 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:35:43.973 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:35:43.973 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:35:43.974 00:41:07 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:43.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.974 --rc 
genhtml_branch_coverage=1 00:35:43.974 --rc genhtml_function_coverage=1 00:35:43.974 --rc genhtml_legend=1 00:35:43.974 --rc geninfo_all_blocks=1 00:35:43.974 --rc geninfo_unexecuted_blocks=1 00:35:43.974 00:35:43.974 ' 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:43.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.974 --rc genhtml_branch_coverage=1 00:35:43.974 --rc genhtml_function_coverage=1 00:35:43.974 --rc genhtml_legend=1 00:35:43.974 --rc geninfo_all_blocks=1 00:35:43.974 --rc geninfo_unexecuted_blocks=1 00:35:43.974 00:35:43.974 ' 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:43.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.974 --rc genhtml_branch_coverage=1 00:35:43.974 --rc genhtml_function_coverage=1 00:35:43.974 --rc genhtml_legend=1 00:35:43.974 --rc geninfo_all_blocks=1 00:35:43.974 --rc geninfo_unexecuted_blocks=1 00:35:43.974 00:35:43.974 ' 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:43.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.974 --rc genhtml_branch_coverage=1 00:35:43.974 --rc genhtml_function_coverage=1 00:35:43.974 --rc genhtml_legend=1 00:35:43.974 --rc geninfo_all_blocks=1 00:35:43.974 --rc geninfo_unexecuted_blocks=1 00:35:43.974 00:35:43.974 ' 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:43.974 
00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.974 00:41:07 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:43.974 
00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:43.974 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:43.975 ************************************ 00:35:43.975 START TEST nvmf_abort 00:35:43.975 ************************************ 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:43.975 * Looking for test storage... 
00:35:43.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:35:43.975 00:41:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:43.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.975 --rc genhtml_branch_coverage=1 00:35:43.975 --rc genhtml_function_coverage=1 00:35:43.975 --rc genhtml_legend=1 00:35:43.975 --rc geninfo_all_blocks=1 00:35:43.975 --rc geninfo_unexecuted_blocks=1 00:35:43.975 00:35:43.975 ' 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:43.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.975 --rc genhtml_branch_coverage=1 00:35:43.975 --rc genhtml_function_coverage=1 00:35:43.975 --rc genhtml_legend=1 00:35:43.975 --rc geninfo_all_blocks=1 00:35:43.975 --rc geninfo_unexecuted_blocks=1 00:35:43.975 00:35:43.975 ' 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:43.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.975 --rc genhtml_branch_coverage=1 00:35:43.975 --rc genhtml_function_coverage=1 00:35:43.975 --rc genhtml_legend=1 00:35:43.975 --rc geninfo_all_blocks=1 00:35:43.975 --rc geninfo_unexecuted_blocks=1 00:35:43.975 00:35:43.975 ' 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:43.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.975 --rc genhtml_branch_coverage=1 00:35:43.975 --rc genhtml_function_coverage=1 00:35:43.975 --rc genhtml_legend=1 00:35:43.975 --rc geninfo_all_blocks=1 00:35:43.975 --rc geninfo_unexecuted_blocks=1 00:35:43.975 00:35:43.975 ' 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:43.975 00:41:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:43.975 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:43.976 00:41:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:35:43.976 00:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:46.511 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:46.512 00:41:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:46.512 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:46.512 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:46.512 
00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:46.512 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:46.512 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:46.512 00:41:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:46.512 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:46.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:46.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:35:46.512 00:35:46.513 --- 10.0.0.2 ping statistics --- 00:35:46.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.513 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:46.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:46.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:35:46.513 00:35:46.513 --- 10.0.0.1 ping statistics --- 00:35:46.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.513 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
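The nvmf_tcp_init section above builds a two-interface loopback topology: the target NIC is moved into a network namespace and given 10.0.0.2, the initiator NIC stays in the root namespace with 10.0.0.1, an iptables ACCEPT rule opens TCP port 4420, and cross-namespace pings verify connectivity. The following dry-run sketch only prints the command plan so it can be inspected without root; the device and namespace names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) are taken from the log, and actually executing these commands requires root privileges.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology the log sets up; `run` only
# prints each command so the plan can be inspected without root.
run() { echo "+ $*"; }

target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"
run ip addr add 10.0.0.1/24 dev "$initiator_if"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target, as logged
run ip netns exec "$ns" ping -c 1 10.0.0.1  # target -> initiator, as logged
```

Because the target runs inside the namespace, the log later prefixes the nvmf_tgt launch with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array).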
nvmfpid=407553 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 407553 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 407553 ']' 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:46.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:46.513 00:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:46.513 [2024-11-18 00:41:09.992144] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:46.513 [2024-11-18 00:41:09.993333] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:35:46.513 [2024-11-18 00:41:09.993392] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:46.513 [2024-11-18 00:41:10.074902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:46.513 [2024-11-18 00:41:10.124693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:46.513 [2024-11-18 00:41:10.124749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:46.513 [2024-11-18 00:41:10.124763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:46.513 [2024-11-18 00:41:10.124775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:46.513 [2024-11-18 00:41:10.124784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:46.513 [2024-11-18 00:41:10.126281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:46.513 [2024-11-18 00:41:10.126350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:46.513 [2024-11-18 00:41:10.126355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.513 [2024-11-18 00:41:10.223224] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:46.513 [2024-11-18 00:41:10.223478] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:46.513 [2024-11-18 00:41:10.223484] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:35:46.513 [2024-11-18 00:41:10.223775] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:46.513 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:46.513 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:35:46.513 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:46.513 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:46.513 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:46.513 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:46.513 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:35:46.513 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.513 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:46.513 [2024-11-18 00:41:10.279150] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:46.513 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.513 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:35:46.513 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.513 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:35:46.513 Malloc0 00:35:46.513 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.513 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:46.513 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.513 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:46.771 Delay0 00:35:46.771 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.771 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:46.771 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.771 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:46.771 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.771 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:35:46.771 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.771 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:46.771 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.771 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:35:46.771 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.771 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:46.771 [2024-11-18 00:41:10.359322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:46.772 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.772 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:46.772 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.772 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:46.772 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.772 00:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:35:46.772 [2024-11-18 00:41:10.469197] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:49.308 Initializing NVMe Controllers 00:35:49.308 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:35:49.308 controller IO queue size 128 less than required 00:35:49.308 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:35:49.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:35:49.309 Initialization complete. Launching workers. 
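The `rpc_cmd` calls traced above wire a 64 MiB malloc bdev behind a delay bdev into subsystem nqn.2016-06.io.spdk:cnode0, then expose it over TCP on 10.0.0.2:4420. A dry-run sketch of the same sequence follows, spelled as scripts/rpc.py invocations; the `rpc.py` spelling is an assumption (the trace uses the test suite's `rpc_cmd` wrapper), and `rpc` here only prints what it would run.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence in the trace; `rpc` only prints the
# scripts/rpc.py invocation it would make (an assumed spelling of the
# trace's rpc_cmd wrapper).
rpc() { echo "+ rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB bdev, 4096-byte blocks
rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000  # added latency on every I/O path
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The delay bdev is what makes the abort test meaningful: with large per-I/O latency, submitted reads and writes sit in flight long enough for the abort example to cancel them, which is why the results that follow report tens of thousands of successful aborts.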
00:35:49.309 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28590 00:35:49.309 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28647, failed to submit 66 00:35:49.309 success 28590, unsuccessful 57, failed 0 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:49.309 rmmod nvme_tcp 00:35:49.309 rmmod nvme_fabrics 00:35:49.309 rmmod nvme_keyring 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:49.309 00:41:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 407553 ']' 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 407553 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 407553 ']' 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 407553 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 407553 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 407553' 00:35:49.309 killing process with pid 407553 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 407553 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 407553 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:49.309 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:49.310 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:49.310 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:35:49.310 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:35:49.310 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:49.310 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:35:49.310 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:49.310 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:49.310 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:49.310 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:49.310 00:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:51.230 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:51.230 00:35:51.230 real 0m7.562s 00:35:51.230 user 0m9.845s 00:35:51.230 sys 0m2.970s 00:35:51.230 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:51.230 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:51.230 ************************************ 00:35:51.230 END TEST nvmf_abort 00:35:51.230 ************************************ 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 
-- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:51.490 ************************************ 00:35:51.490 START TEST nvmf_ns_hotplug_stress 00:35:51.490 ************************************ 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:35:51.490 * Looking for test storage... 00:35:51.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 
ver2_l 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:51.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.490 --rc genhtml_branch_coverage=1 00:35:51.490 --rc genhtml_function_coverage=1 00:35:51.490 --rc genhtml_legend=1 00:35:51.490 --rc geninfo_all_blocks=1 00:35:51.490 --rc geninfo_unexecuted_blocks=1 00:35:51.490 00:35:51.490 ' 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:51.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.490 --rc genhtml_branch_coverage=1 00:35:51.490 --rc genhtml_function_coverage=1 00:35:51.490 --rc genhtml_legend=1 00:35:51.490 --rc geninfo_all_blocks=1 00:35:51.490 --rc geninfo_unexecuted_blocks=1 00:35:51.490 00:35:51.490 ' 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:51.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.490 --rc genhtml_branch_coverage=1 00:35:51.490 --rc genhtml_function_coverage=1 00:35:51.490 --rc genhtml_legend=1 00:35:51.490 --rc geninfo_all_blocks=1 00:35:51.490 --rc geninfo_unexecuted_blocks=1 00:35:51.490 00:35:51.490 ' 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:51.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.490 --rc genhtml_branch_coverage=1 00:35:51.490 --rc genhtml_function_coverage=1 00:35:51.490 --rc genhtml_legend=1 00:35:51.490 --rc geninfo_all_blocks=1 00:35:51.490 --rc geninfo_unexecuted_blocks=1 00:35:51.490 00:35:51.490 ' 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:35:51.490 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:51.491 00:41:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:51.491 00:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:35:51.491 00:41:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:35:54.023 00:41:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:54.023 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:54.024 
00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:54.024 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:54.024 00:41:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:54.024 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:54.024 00:41:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:54.024 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:54.024 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:54.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:54.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:35:54.024 00:35:54.024 --- 10.0.0.2 ping statistics --- 00:35:54.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:54.024 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:54.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:54.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:35:54.024 00:35:54.024 --- 10.0.0.1 ping statistics --- 00:35:54.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:54.024 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:54.024 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:54.025 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:54.025 00:41:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:54.025 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:54.025 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:35:54.025 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:54.025 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:54.025 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:54.025 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=409888 00:35:54.025 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:35:54.025 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 409888 00:35:54.025 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 409888 ']' 00:35:54.025 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:54.025 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:54.025 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
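The nvmf/common.sh trace above (script lines @250–@291) amounts to the following setup sequence. This is a hand-written summary of the commands the test actually ran, not the script itself; it requires root, and `cvl_0_0`/`cvl_0_1` are the two ice ports found under 0000:0a:00.x on this runner:

```shell
# Summary of the TCP test-network setup traced above (requires root).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                      # target gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, host netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
ping -c 1 10.0.0.2                                # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

After both pings succeed, the target application (`nvmf_tgt`) is launched inside the `cvl_0_0_ns_spdk` namespace, which is why the `NVMF_APP` array is prefixed with `ip netns exec` in the trace.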
00:35:54.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:54.025 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:54.025 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:35:54.025 [2024-11-18 00:41:17.668259] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:35:54.025 [2024-11-18 00:41:17.669277] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:35:54.025 [2024-11-18 00:41:17.669337] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:54.025 [2024-11-18 00:41:17.742214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:35:54.025 [2024-11-18 00:41:17.789383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:54.025 [2024-11-18 00:41:17.789433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:54.025 [2024-11-18 00:41:17.789460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:54.025 [2024-11-18 00:41:17.789472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:54.025 [2024-11-18 00:41:17.789481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:54.025 [2024-11-18 00:41:17.790856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:35:54.025 [2024-11-18 00:41:17.790923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:35:54.025 [2024-11-18 00:41:17.790926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:54.282 [2024-11-18 00:41:17.874369] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:35:54.282 [2024-11-18 00:41:17.874564] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:35:54.282 [2024-11-18 00:41:17.874579] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:35:54.282 [2024-11-18 00:41:17.874823] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:35:54.282 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:54.282 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:35:54.282 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:35:54.282 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:54.282 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:35:54.282 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:54.282 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:35:54.282 00:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:54.541 [2024-11-18 00:41:18.183583] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:54.541 00:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:54.799 00:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:55.057 [2024-11-18 00:41:18.739999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:55.057 00:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:55.315 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:35:55.572 Malloc0 00:35:55.572 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:55.829 Delay0 00:35:55.829 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:56.087 00:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:35:56.652 NULL1 00:35:56.652 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:35:56.910 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=410188 00:35:56.910 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:35:56.910 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:35:56.910 00:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:58.281 Read completed with error (sct=0, sc=11) 00:35:58.281 00:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:58.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:58.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:58.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
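Condensed, the rpc.py provisioning traced above (ns_hotplug_stress.sh lines @27–@42) is the sequence below. The commands and arguments are taken verbatim from the trace; the `$SPDK` variable standing in for the long workspace path and the listener for `discovery` being optional on some setups are assumptions of this sketch:

```shell
# Provisioning sequence, as traced (target already up in cvl_0_0_ns_spdk).
RPC="$SPDK/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 512 -b Malloc0                  # backing bdev
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$RPC bdev_null_create NULL1 1000 512                       # resized by the stress loop
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# I/O load that keeps running while namespaces are hot-plugged underneath it:
$SPDK/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
```

The `Read completed with error (sct=0, sc=11)` messages that follow are the expected ABORTED_SQ_DELETION completions the perf workload sees while a namespace is detached.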
00:35:58.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:58.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:58.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:58.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:58.281 00:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:35:58.281 00:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:35:58.539 true 00:35:58.539 00:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:35:58.539 00:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:59.472 00:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:59.730 00:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:35:59.730 00:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:35:59.988 true 00:35:59.988 00:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:35:59.988 00:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:00.246 00:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:00.503 00:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:00.503 00:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:00.760 true 00:36:00.760 00:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:00.760 00:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:01.018 00:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:01.275 00:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:01.275 00:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:01.533 true 00:36:01.533 00:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:01.533 00:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:02.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:02.467 00:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:02.724 00:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:02.724 00:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:02.982 true 00:36:02.982 00:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:02.982 00:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:03.239 00:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:03.496 00:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:03.496 00:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:03.753 true 00:36:03.753 00:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
410188 00:36:03.753 00:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:04.011 00:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:04.269 00:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:04.269 00:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:04.527 true 00:36:04.527 00:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:04.527 00:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:05.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:05.459 00:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:05.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:05.716 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:05.716 00:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:05.716 00:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:05.984 true 00:36:05.984 00:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:05.984 00:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:06.549 00:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:06.807 00:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:06.807 00:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:07.065 true 00:36:07.065 00:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:07.065 00:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:07.631 00:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:07.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:07.888 00:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1010 00:36:07.888 00:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:08.145 true 00:36:08.406 00:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:08.406 00:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:08.665 00:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:08.923 00:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:08.923 00:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:09.180 true 00:36:09.180 00:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:09.180 00:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:10.117 00:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:10.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:10.117 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:10.117 00:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:10.117 00:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:10.375 true 00:36:10.375 00:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:10.375 00:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:10.633 00:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:10.890 00:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:10.890 00:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:11.149 true 00:36:11.149 00:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:11.149 00:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:11.714 00:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:11.714 00:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:11.714 00:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:11.971 true 00:36:11.971 00:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:11.972 00:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:13.357 00:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:13.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:13.357 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:13.357 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:13.622 true 00:36:13.622 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:13.622 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:13.879 00:41:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:14.137 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:14.137 00:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:14.395 true 00:36:14.395 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:14.395 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:15.329 00:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:15.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:15.586 00:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:15.586 00:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:15.843 true 00:36:15.843 00:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:15.843 00:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:16.100 00:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:16.358 00:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:16.358 00:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:16.615 true 00:36:16.615 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:16.615 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:16.872 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:17.130 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:17.130 00:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:17.389 true 00:36:17.389 00:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:17.390 00:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:18.322 00:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:18.322 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:18.322 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:18.579 00:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:18.579 00:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:18.836 true 00:36:18.836 00:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:18.836 00:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:19.094 00:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:19.364 00:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:19.364 00:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:19.622 true 00:36:19.622 00:41:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:19.622 00:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:19.880 00:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:20.138 00:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:20.138 00:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:20.396 true 00:36:20.396 00:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:20.396 00:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:21.769 00:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:21.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:21.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:21.769 00:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:21.769 00:41:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:22.026 true 00:36:22.026 00:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:22.026 00:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:22.285 00:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:22.542 00:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:22.543 00:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:22.814 true 00:36:22.814 00:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:22.814 00:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:23.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:23.744 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:23.744 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:36:24.000 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:24.001 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:24.257 true 00:36:24.257 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:24.257 00:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:24.515 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:24.772 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:24.772 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:25.032 true 00:36:25.032 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:25.032 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:25.289 00:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:36:25.545 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:25.545 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:25.803 true 00:36:25.803 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:25.803 00:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:26.736 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:26.993 Initializing NVMe Controllers 00:36:26.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:26.993 Controller IO queue size 128, less than required. 00:36:26.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:26.993 Controller IO queue size 128, less than required. 00:36:26.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:26.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:26.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:26.993 Initialization complete. Launching workers. 
00:36:26.993 ======================================================== 00:36:26.993 Latency(us) 00:36:26.993 Device Information : IOPS MiB/s Average min max 00:36:26.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 811.76 0.40 71793.82 2369.54 1014133.96 00:36:26.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9383.50 4.58 13640.45 3132.80 457620.27 00:36:26.993 ======================================================== 00:36:26.993 Total : 10195.26 4.98 18270.71 2369.54 1014133.96 00:36:26.993 00:36:26.993 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:36:26.993 00:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:36:27.251 true 00:36:27.509 00:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 410188 00:36:27.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (410188) - No such process 00:36:27.509 00:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 410188 00:36:27.509 00:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.767 00:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:28.025 00:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:36:28.025 00:41:51 
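The trace up to this point repeats one pattern from `ns_hotplug_stress.sh`: while the target process (PID 410188 here) is alive, the test removes namespace 1 from `nqn.2016-06.io.spdk:cnode1`, re-adds the `Delay0` bdev as that namespace, and grows the `NULL1` null bdev by one block per pass, until `kill -0` reports the target gone and the loop falls through to `wait`. A minimal runnable sketch of that loop, with `rpc()` as an echo stub standing in for `spdk/scripts/rpc.py` (the real script needs a live SPDK target), and the iteration count bounded artificially so the sketch terminates:

```shell
#!/usr/bin/env bash
# Sketch of the ns_hotplug_stress.sh main loop seen in the trace above.
# rpc() is a stub for spdk/scripts/rpc.py so this runs without a target;
# swap in the real script path to drive an actual SPDK nvmf target.
rpc() { echo "rpc $*"; }

target_pid=$$      # stand-in for the spdk_tgt PID (410188 in the log)
null_size=1013     # trace shows this climbing 1014, 1015, ... per pass

# kill -0 probes liveness without signalling; the size bound is added
# here only so the sketch terminates on its own.
while kill -0 "$target_pid" 2>/dev/null && (( null_size < 1016 )); do
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    (( ++null_size ))
    rpc bdev_null_resize NULL1 "$null_size"
done
echo "final null_size=$null_size"
```

In the real test the loop only ends when the target process exits, which is exactly what the `kill: (410188) - No such process` line below records.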
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:36:28.025 00:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:36:28.025 00:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:28.025 00:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:36:28.283 null0 00:36:28.283 00:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:28.283 00:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:28.283 00:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:36:28.542 null1 00:36:28.542 00:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:28.542 00:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:28.542 00:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:28.800 null2 00:36:28.800 00:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:28.800 00:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:28.800 00:41:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:29.056 null3 00:36:29.056 00:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:29.056 00:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:29.056 00:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:29.313 null4 00:36:29.313 00:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:29.313 00:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:29.313 00:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:36:29.573 null5 00:36:29.573 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:29.573 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:29.573 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:36:29.831 null6 00:36:29.831 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:29.831 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:29.831 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:36:30.089 null7 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:30.089 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 414183 414184 414186 414188 414190 414192 414194 414196 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.090 00:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:30.374 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:30.374 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:30.374 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:36:30.374 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:30.374 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:30.374 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:30.374 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:30.374 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.633 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:30.890 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:30.890 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:30.890 00:41:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.891 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:30.891 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:30.891 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:30.891 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:31.149 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:31.407 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.407 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.407 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:31.407 00:41:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.407 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.407 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:31.407 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.407 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.407 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:31.407 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.407 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.407 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:31.407 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.407 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.407 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:31.407 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.407 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.407 00:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:31.407 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.407 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.407 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:31.407 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.407 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.407 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:31.664 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:31.664 00:41:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.664 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:31.664 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:31.664 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:31.664 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:31.664 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:31.664 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.922 00:41:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.922 00:41:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.922 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:32.181 00:41:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:32.181 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:32.181 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:32.181 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:32.181 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:32.181 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:32.181 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:32.181 00:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:32.439 00:41:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.439 00:41:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.439 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.439 00:41:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:32.698 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:32.698 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:32.698 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:32.698 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:32.698 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:32.698 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:32.698 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:32.698 00:41:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:32.956 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.956 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.956 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:32.956 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.956 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.956 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:33.213 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.213 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.213 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:33.213 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.213 00:41:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.213 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:33.213 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.213 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.213 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:33.213 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.213 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.213 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:33.213 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.213 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.213 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:33.213 00:41:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.213 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.213 00:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:33.471 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:33.471 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:33.471 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:33.471 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:33.471 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:33.471 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:33.471 00:41:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:33.471 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.730 00:41:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.730 00:41:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:33.730 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:33.989 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:33.989 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:33.989 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:33.989 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:33.989 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:33.989 00:41:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:33.989 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:33.989 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.248 00:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:34.507 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:34.507 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:34.507 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:34.507 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:34.507 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:34.507 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:34.507 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:34.507 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.766 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.766 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.766 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:34.766 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.766 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.766 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:34.766 00:41:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.766 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.766 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:34.766 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.766 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.766 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:34.766 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.766 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:34.766 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.766 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:34.766 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:34.766 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:35.024 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:35.024 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:35.024 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:35.024 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:35.024 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:35.024 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:35.282 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:35.282 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:35.282 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:35.282 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:35.282 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:35.282 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.282 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:35.282 00:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:35.554 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:35.554 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:35.554 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:35.554 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:35.554 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:35.555 00:41:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:35.555 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:35.555 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:35.555 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:35.555 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:35.555 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:35.555 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:35.555 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:35.555 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:35.555 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:35.556 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:35.556 00:41:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:35.556 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:35.556 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:35.556 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:35.556 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:35.556 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:35.556 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:35.556 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:35.818 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:35.818 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:35.818 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:35.818 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:35.819 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.819 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:35.819 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:35.819 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.077 00:41:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:36.077 rmmod nvme_tcp 00:36:36.077 rmmod nvme_fabrics 00:36:36.077 rmmod nvme_keyring 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 409888 ']' 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 409888 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 409888 ']' 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 409888 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:36.077 00:41:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 409888 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 409888' 00:36:36.077 killing process with pid 409888 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 409888 00:36:36.077 00:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 409888 00:36:36.335 00:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:36.335 00:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:36.335 00:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:36.335 00:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:36:36.335 00:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:36:36.335 00:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:36.335 00:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:36:36.335 00:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:36.335 00:42:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:36.335 00:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:36.335 00:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:36.335 00:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:38.999 00:36:38.999 real 0m47.068s 00:36:38.999 user 3m15.994s 00:36:38.999 sys 0m22.004s 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:38.999 ************************************ 00:36:38.999 END TEST nvmf_ns_hotplug_stress 00:36:38.999 ************************************ 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:38.999 ************************************ 00:36:38.999 START TEST nvmf_delete_subsystem 00:36:38.999 ************************************ 00:36:38.999 00:42:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:36:38.999 * Looking for test storage... 00:36:38.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:36:38.999 00:42:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:36:38.999 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:39.000 00:42:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:39.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.000 --rc genhtml_branch_coverage=1 00:36:39.000 --rc genhtml_function_coverage=1 00:36:39.000 --rc genhtml_legend=1 00:36:39.000 --rc geninfo_all_blocks=1 00:36:39.000 --rc geninfo_unexecuted_blocks=1 00:36:39.000 00:36:39.000 ' 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:39.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.000 --rc genhtml_branch_coverage=1 00:36:39.000 --rc genhtml_function_coverage=1 00:36:39.000 --rc genhtml_legend=1 00:36:39.000 --rc geninfo_all_blocks=1 00:36:39.000 --rc geninfo_unexecuted_blocks=1 00:36:39.000 00:36:39.000 ' 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:39.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.000 --rc genhtml_branch_coverage=1 00:36:39.000 --rc 
genhtml_function_coverage=1 00:36:39.000 --rc genhtml_legend=1 00:36:39.000 --rc geninfo_all_blocks=1 00:36:39.000 --rc geninfo_unexecuted_blocks=1 00:36:39.000 00:36:39.000 ' 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:39.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.000 --rc genhtml_branch_coverage=1 00:36:39.000 --rc genhtml_function_coverage=1 00:36:39.000 --rc genhtml_legend=1 00:36:39.000 --rc geninfo_all_blocks=1 00:36:39.000 --rc geninfo_unexecuted_blocks=1 00:36:39.000 00:36:39.000 ' 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:39.000 00:42:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:36:39.000 00:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- 
# e810=() 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:40.934 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:40.935 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:40.935 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:40.935 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:40.935 00:42:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:40.935 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:40.935 00:42:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:40.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:40.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:36:40.935 00:36:40.935 --- 10.0.0.2 ping statistics --- 00:36:40.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.935 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:36:40.935 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:40.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:40.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:36:40.936 00:36:40.936 --- 10.0.0.1 ping statistics --- 00:36:40.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.936 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=417122 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 417122 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 417122 ']' 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:40.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:40.936 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:40.936 [2024-11-18 00:42:04.664979] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:40.936 [2024-11-18 00:42:04.666055] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:36:40.936 [2024-11-18 00:42:04.666108] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:40.936 [2024-11-18 00:42:04.738528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:41.195 [2024-11-18 00:42:04.787209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:41.195 [2024-11-18 00:42:04.787266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:41.195 [2024-11-18 00:42:04.787295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:41.195 [2024-11-18 00:42:04.787306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:41.195 [2024-11-18 00:42:04.787325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:41.195 [2024-11-18 00:42:04.788804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:41.195 [2024-11-18 00:42:04.788810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:41.195 [2024-11-18 00:42:04.874591] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:41.195 [2024-11-18 00:42:04.874644] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:41.195 [2024-11-18 00:42:04.874874] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:36:41.195 [2024-11-18 00:42:04.925500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:36:41.195 [2024-11-18 00:42:04.945721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:36:41.195 NULL1
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:36:41.195 Delay0
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=417199
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:36:41.195 00:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:36:41.453 [2024-11-18 00:42:05.023988] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
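The xtrace entries above show the RPC sequence the test drives through `rpc_cmd` before launching perf: create a TCP transport, a subsystem, a listener, a null bdev, a delay bdev wrapping it, and finally attach the delay bdev as a namespace. A rough dry-run sketch of that sequence (the `rpc` wrapper and function name below are hypothetical; only the RPC names and arguments come from the trace; `rpc` defaults to `echo` so it is safe to run without a live target, and `SPDK_RPC` could point at SPDK's `scripts/rpc.py` instead):

```shell
# Dry-run sketch of the target setup traced above. Not the test script itself:
# `rpc` just echoes by default; set SPDK_RPC to a real RPC client to drive a target.
rpc() { ${SPDK_RPC:-echo} "$@"; }

setup_delete_subsystem_target() {
  rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_null_create NULL1 1000 512                            # null bdev, 512 B blocks
  rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # expose Delay0 as a namespace
}

setup_delete_subsystem_target
```

The delay bdev's large latencies are what keep I/O in flight long enough for the later `nvmf_delete_subsystem` to abort it mid-run.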
00:36:43.351 00:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:43.351 00:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.351 00:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 starting I/O failed: -6 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 starting I/O failed: -6 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 starting I/O failed: -6 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 starting I/O failed: -6 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 starting I/O failed: -6 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 starting I/O failed: -6 00:36:43.351 Write completed with error (sct=0, sc=8) 
00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 starting I/O failed: -6 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 starting I/O failed: -6 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 starting I/O failed: -6 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 starting I/O failed: -6 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 starting I/O failed: -6 00:36:43.351 [2024-11-18 00:42:07.105283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6343f0 is same with the state(6) to be set 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 starting I/O failed: -6 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 
00:36:43.351 starting I/O failed: -6 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 starting I/O failed: -6 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 starting I/O failed: -6 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 starting I/O failed: -6 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Write completed with 
error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 starting I/O failed: -6 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 starting I/O failed: -6 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 starting I/O failed: -6 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 
00:36:43.351 starting I/O failed: -6 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 [2024-11-18 00:42:07.106071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4b78000c40 is same with the state(6) to be set 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Write completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.351 Read completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Read completed 
with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Write completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Write completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:43.352 Read completed with error (sct=0, sc=8) 00:36:44.283 [2024-11-18 00:42:08.085404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6425b0 is same with the state(6) to be set 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Write completed with error (sct=0, sc=8) 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Write completed with error (sct=0, sc=8) 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Write completed with error (sct=0, sc=8) 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Write completed with error (sct=0, sc=8) 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 [2024-11-18 00:42:08.105522] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4b7800d020 is same with the state(6) to be set 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Read completed with error (sct=0, sc=8) 00:36:44.283 Write completed with error (sct=0, sc=8) 00:36:44.544 [2024-11-18 00:42:08.105754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4b7800d680 is same with the state(6) to be set 00:36:44.544 Read completed with error (sct=0, sc=8) 00:36:44.544 Read completed with error (sct=0, sc=8) 00:36:44.544 Read completed with error (sct=0, sc=8) 00:36:44.544 Write completed with error (sct=0, sc=8) 00:36:44.544 Read completed with error (sct=0, sc=8) 00:36:44.544 Read completed with error (sct=0, sc=8) 00:36:44.544 Read completed with error (sct=0, sc=8) 00:36:44.544 Read completed with error (sct=0, sc=8) 00:36:44.544 Read completed with error (sct=0, sc=8) 00:36:44.544 Read completed with error (sct=0, sc=8) 00:36:44.544 Read completed with error (sct=0, sc=8) 00:36:44.544 Read completed with error (sct=0, sc=8) 00:36:44.544 Read completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Write completed with error (sct=0, sc=8) 00:36:44.545 Write completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, 
sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 [2024-11-18 00:42:08.105935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x634810 is same with the state(6) to be set 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Write completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Write completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Write completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.545 Read completed with error (sct=0, sc=8) 00:36:44.546 Write completed with error (sct=0, sc=8) 00:36:44.546 Read completed with error (sct=0, sc=8) 00:36:44.546 [2024-11-18 00:42:08.106374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x634e70 is same with the state(6) to be set 00:36:44.546 Initializing NVMe Controllers 00:36:44.546 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:44.546 Controller IO queue size 128, less than required. 00:36:44.546 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:36:44.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:36:44.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:36:44.546 Initialization complete. Launching workers.
00:36:44.546 ========================================================
00:36:44.546 Latency(us)
00:36:44.546 Device Information : IOPS MiB/s Average min max
00:36:44.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.83 0.08 903528.73 866.11 1011729.62
00:36:44.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 147.95 0.07 949257.31 400.09 1011709.51
00:36:44.546 ========================================================
00:36:44.546 Total : 313.78 0.15 925090.62 400.09 1011729.62
00:36:44.546
00:36:44.546 [2024-11-18 00:42:08.107277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6425b0 (9): Bad file descriptor
00:36:44.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:36:44.546 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:44.546 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:36:44.546 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 417199
00:36:44.546 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:36:44.808 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:36:44.808 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 417199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line
35: kill: (417199) - No such process 00:36:44.808 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 417199 00:36:44.808 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:36:44.808 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 417199 00:36:44.808 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:36:44.808 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:44.809 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:36:44.809 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:44.809 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 417199 00:36:44.809 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:36:44.809 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:44.809 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:44.809 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:44.809 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:44.809 00:42:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.809 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:44.809 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.809 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:44.809 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.809 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:44.809 [2024-11-18 00:42:08.625689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:44.809 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.809 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.809 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.809 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:45.067 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.067 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=418096 00:36:45.067 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 
00:36:45.067 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 418096
00:36:45.067 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:36:45.067 00:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:36:45.067 [2024-11-18 00:42:08.688834] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:36:45.325 00:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:36:45.325 00:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 418096
00:36:45.325 00:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:36:45.898 00:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:36:45.898 00:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 418096
00:36:45.898 00:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:36:46.464 00:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:36:46.464 00:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 418096
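The repeating `kill -0` / `sleep 0.5` pairs in the trace are delete_subsystem.sh's wait loop: `kill -0` sends no signal, it only probes whether the perf process still exists, and the loop gives up after a bounded number of iterations. A minimal stand-alone sketch of that pattern (the function name is ours; the 0.5 s interval and ~20-iteration cap mirror the trace):

```shell
# Poll until a process exits, mirroring the loop visible in the trace.
# kill -0 <pid> probes for existence without signalling anything.
# Returns 0 once the process is gone, 1 if it is still alive after ~10 s.
wait_for_exit() {
  pid=$1
  delay=0
  while kill -0 "$pid" 2>/dev/null; do
    if [ "$delay" -gt 20 ]; then
      return 1            # timed out: process still running
    fi
    delay=$((delay + 1))
    sleep 0.5
  done
  return 0
}
```

When the probed process has already been reaped, as after `nvmf_delete_subsystem` kills perf here, `kill -0` fails immediately and the loop exits on its first check, which is why the trace shows "No such process" from the bare `kill` probe.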
00:36:46.464 00:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:36:47.031 00:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:36:47.031 00:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 418096
00:36:47.032 00:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:36:47.597 00:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:36:47.597 00:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 418096
00:36:47.597 00:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:36:47.855 00:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:36:47.855 00:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 418096
00:36:47.855 00:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:36:48.420 Initializing NVMe Controllers
00:36:48.420 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:48.420 Controller IO queue size 128, less than required.
00:36:48.420 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:48.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:36:48.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:36:48.420 Initialization complete. Launching workers.
00:36:48.420 ========================================================
00:36:48.420 Latency(us)
00:36:48.420 Device Information : IOPS MiB/s Average min max
00:36:48.420 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004635.96 1000241.82 1012131.87
00:36:48.420 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005503.75 1000277.08 1043680.20
00:36:48.420 ========================================================
00:36:48.420 Total : 256.00 0.12 1005069.86 1000241.82 1043680.20
00:36:48.420
00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 418096
00:36:48.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (418096) - No such process
00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 418096
00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:48.420 rmmod nvme_tcp 00:36:48.420 rmmod nvme_fabrics 00:36:48.420 rmmod nvme_keyring 00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 417122 ']' 00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 417122 00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 417122 ']' 00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 417122 00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 417122 00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 417122' 00:36:48.420 killing process with pid 417122 00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 417122 00:36:48.420 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 417122 00:36:48.679 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:48.679 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:48.679 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:48.679 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:36:48.679 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:36:48.679 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:36:48.679 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:48.679 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:48.679 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:48.679 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:48.679 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:48.679 00:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:51.213 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:51.213 00:36:51.213 real 0m12.266s 00:36:51.213 user 0m24.510s 00:36:51.213 sys 0m3.770s 00:36:51.213 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:51.213 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:51.213 ************************************ 00:36:51.213 END TEST nvmf_delete_subsystem 00:36:51.213 ************************************ 00:36:51.213 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:36:51.213 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:51.213 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:51.213 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:51.213 ************************************ 00:36:51.213 START TEST nvmf_host_management 00:36:51.213 ************************************ 00:36:51.213 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:36:51.213 * Looking for test storage... 
00:36:51.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:51.213 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:51.213 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:36:51.213 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:51.213 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:51.213 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:51.213 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:36:51.214 00:42:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:51.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.214 --rc genhtml_branch_coverage=1 00:36:51.214 --rc genhtml_function_coverage=1 00:36:51.214 --rc genhtml_legend=1 00:36:51.214 --rc geninfo_all_blocks=1 00:36:51.214 --rc geninfo_unexecuted_blocks=1 00:36:51.214 00:36:51.214 ' 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:51.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.214 --rc genhtml_branch_coverage=1 00:36:51.214 --rc genhtml_function_coverage=1 00:36:51.214 --rc genhtml_legend=1 00:36:51.214 --rc geninfo_all_blocks=1 00:36:51.214 --rc geninfo_unexecuted_blocks=1 00:36:51.214 00:36:51.214 ' 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:51.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.214 --rc genhtml_branch_coverage=1 00:36:51.214 --rc genhtml_function_coverage=1 00:36:51.214 --rc genhtml_legend=1 00:36:51.214 --rc geninfo_all_blocks=1 00:36:51.214 --rc geninfo_unexecuted_blocks=1 00:36:51.214 00:36:51.214 ' 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:51.214 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.214 --rc genhtml_branch_coverage=1 00:36:51.214 --rc genhtml_function_coverage=1 00:36:51.214 --rc genhtml_legend=1 00:36:51.214 --rc geninfo_all_blocks=1 00:36:51.214 --rc geninfo_unexecuted_blocks=1 00:36:51.214 00:36:51.214 ' 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:51.214 00:42:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.214 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.214 
00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:36:51.215 00:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:36:53.120 
00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:53.120 00:42:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:53.120 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:53.120 00:42:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:53.120 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:53.120 00:42:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:53.120 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:53.121 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:53.121 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:53.121 00:42:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:53.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:53.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:36:53.121 00:36:53.121 --- 10.0.0.2 ping statistics --- 00:36:53.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:53.121 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:53.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:53.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:36:53.121 00:36:53.121 --- 10.0.0.1 ping statistics --- 00:36:53.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:53.121 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
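The namespace plumbing replayed above (common.sh@265-291) can be sketched as a standalone script. The interface names `cvl_0_0`/`cvl_0_1`, the 10.0.0.1/2 addresses, and the port-4420 rule are taken from this run; `run()` is a dry-run wrapper of our own so the sequence can be printed and inspected without root privileges — it is not part of SPDK.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the common.sh namespace setup traced above.
# run() only echoes each command, so no root is needed.
run() { printf '%s\n' "$*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                                  # common.sh@271
run ip link set cvl_0_0 netns "$NS"                     # move target-side NIC in
run ip addr add 10.0.0.1/24 dev cvl_0_1                 # host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                  # host -> namespace
run ip netns exec "$NS" ping -c 1 10.0.0.1              # namespace -> host
```

Both ping directions must succeed before common.sh returns 0 and the transport options are assembled.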
00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:53.121 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:53.380 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=420444 00:36:53.380 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:36:53.380 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 420444 00:36:53.380 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 420444 ']' 00:36:53.380 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:53.380 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:36:53.380 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:53.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:53.380 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:53.380 00:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:53.380 [2024-11-18 00:42:16.996265] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:53.380 [2024-11-18 00:42:16.997335] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:36:53.380 [2024-11-18 00:42:16.997400] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:53.380 [2024-11-18 00:42:17.071455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:53.380 [2024-11-18 00:42:17.122480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:53.380 [2024-11-18 00:42:17.122524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:53.380 [2024-11-18 00:42:17.122539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:53.380 [2024-11-18 00:42:17.122552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:53.380 [2024-11-18 00:42:17.122562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:53.380 [2024-11-18 00:42:17.124180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:53.380 [2024-11-18 00:42:17.124235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:53.380 [2024-11-18 00:42:17.124261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:53.380 [2024-11-18 00:42:17.124265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:53.639 [2024-11-18 00:42:17.209106] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:53.639 [2024-11-18 00:42:17.209337] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:53.639 [2024-11-18 00:42:17.209629] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:53.639 [2024-11-18 00:42:17.210229] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:53.639 [2024-11-18 00:42:17.210498] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
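The `waitforlisten 420444` call above blocks until the target's UNIX-domain RPC socket (`/var/tmp/spdk.sock` here) comes up, retrying up to `max_retries=100` times. A minimal sketch of that wait, under the assumption that socket existence is the readiness signal (SPDK's real helper also probes the RPC server and the pid); the helper name is ours:

```shell
# Poll until a UNIX-domain socket path exists, mirroring the
# max_retries=100 budget seen in autotest_common.sh@840.
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}
```

Usage in this flow would be `wait_for_sock /var/tmp/spdk.sock 100` before the first `rpc_cmd` is issued against the target.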
00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:53.639 [2024-11-18 00:42:17.264966] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:53.639 00:42:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:53.639 Malloc0 00:36:53.639 [2024-11-18 00:42:17.337119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=420601 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 420601 /var/tmp/bdevperf.sock 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 420601 ']' 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:53.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:53.639 { 00:36:53.639 "params": { 00:36:53.639 "name": "Nvme$subsystem", 00:36:53.639 "trtype": "$TEST_TRANSPORT", 00:36:53.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.639 "adrfam": "ipv4", 00:36:53.639 "trsvcid": "$NVMF_PORT", 00:36:53.639 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.639 "hdgst": ${hdgst:-false}, 00:36:53.639 "ddgst": ${ddgst:-false} 00:36:53.639 }, 00:36:53.639 "method": "bdev_nvme_attach_controller" 00:36:53.639 } 00:36:53.639 EOF 00:36:53.639 )") 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:36:53.639 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:53.639 "params": { 00:36:53.639 "name": "Nvme0", 00:36:53.639 "trtype": "tcp", 00:36:53.639 "traddr": "10.0.0.2", 00:36:53.639 "adrfam": "ipv4", 00:36:53.639 "trsvcid": "4420", 00:36:53.639 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:53.639 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:53.639 "hdgst": false, 00:36:53.639 "ddgst": false 00:36:53.639 }, 00:36:53.639 "method": "bdev_nvme_attach_controller" 00:36:53.639 }' 00:36:53.639 [2024-11-18 00:42:17.420078] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:36:53.639 [2024-11-18 00:42:17.420153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420601 ] 00:36:53.897 [2024-11-18 00:42:17.492128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.897 [2024-11-18 00:42:17.539348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:54.155 Running I/O for 10 seconds... 
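The `waitforio` exchange that follows polls `bdev_get_iostat` over the bdevperf socket until the bdev has served at least 100 reads (the first probe below returns 67 and sleeps; the second returns 550 and breaks). A sketch of that loop, where the `jq` filter is the one from host_management.sh@55 and `RPC_CMD` is our stand-in for `scripts/rpc.py -s /var/tmp/bdevperf.sock`:

```shell
# Extract num_read_ops from bdev_get_iostat JSON on stdin.
iostat_reads() { jq -r '.bdevs[0].num_read_ops'; }

# Retry up to 10 times, 0.25s apart, until reads cross the threshold,
# mirroring the (( i = 10 )) / sleep 0.25 loop in the trace.
waitforio() {
    local i reads
    for ((i = 10; i > 0; i--)); do
        reads=$($RPC_CMD bdev_get_iostat -b Nvme0n1 | iostat_reads)
        [ "$reads" -ge 100 ] && return 0
        sleep 0.25
    done
    return 1
}
```

Only once `waitforio` returns 0 does the test proceed to `nvmf_subsystem_remove_host`, i.e. it first proves I/O is flowing before yanking the host's access.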
00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:36:54.155 00:42:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:36:54.155 00:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=550 00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 550 -ge 100 ']' 00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:54.414 [2024-11-18 00:42:18.217472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:54.414 [2024-11-18 00:42:18.217541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.414 [2024-11-18 00:42:18.217562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:54.414 [2024-11-18 00:42:18.217576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.414 [2024-11-18 00:42:18.217590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:54.414 [2024-11-18 00:42:18.217604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.414 [2024-11-18 00:42:18.217628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:54.414 [2024-11-18 00:42:18.217641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.414 [2024-11-18 00:42:18.217664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997d70 is same with the state(6) to be set 00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.414 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:54.414 [2024-11-18 00:42:18.227567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x997d70 (9): Bad file descriptor 00:36:54.414 [2024-11-18 00:42:18.227658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.414 [2024-11-18 00:42:18.227681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.414 [2024-11-18 00:42:18.227706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.414 [2024-11-18 00:42:18.227722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.414 [2024-11-18 00:42:18.227738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.414 [2024-11-18 00:42:18.227752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.414 [2024-11-18 00:42:18.227772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.414 [2024-11-18 00:42:18.227786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.414 [2024-11-18 00:42:18.227801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.414 [2024-11-18 00:42:18.227815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.414 [2024-11-18 00:42:18.227830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.414 [2024-11-18 00:42:18.227844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.227859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 
nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.227873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.227888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.227902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.227917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.227931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.227946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.227960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.227980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.227995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:36:54.415 [2024-11-18 00:42:18.228038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 
00:42:18.228206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 
[2024-11-18 00:42:18.228907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.228978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.228995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.229009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.415 [2024-11-18 00:42:18.229024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.415 [2024-11-18 00:42:18.229038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.416 [2024-11-18 00:42:18.229053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.416 [2024-11-18 00:42:18.229067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.416 [2024-11-18 00:42:18.229082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.416 [2024-11-18 00:42:18.229096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.416 [2024-11-18 00:42:18.229115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.416 [2024-11-18 00:42:18.229130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.416 [2024-11-18 00:42:18.229145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.416 [2024-11-18 00:42:18.229159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.416 [2024-11-18 00:42:18.229175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.416 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.416 [2024-11-18 00:42:18.229189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.416 [2024-11-18 00:42:18.229204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.416 [2024-11-18 00:42:18.229218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:36:54.416 [2024-11-18 00:42:18.229232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.416 [2024-11-18 00:42:18.229246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.416 [2024-11-18 00:42:18.229261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.416 [2024-11-18 00:42:18.229275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.416 [2024-11-18 00:42:18.229290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.416 [2024-11-18 00:42:18.229319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.416 00:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:36:54.416 [2024-11-18 00:42:18.229337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.416 [2024-11-18 00:42:18.229352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.416 [2024-11-18 00:42:18.229367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.416 [2024-11-18 00:42:18.229381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.416 [2024-11-18 00:42:18.229396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.416 [2024-11-18 00:42:18.229410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.416 [2024-11-18 00:42:18.229425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.416 [2024-11-18 00:42:18.229439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.416 [2024-11-18 00:42:18.229454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.416 [2024-11-18 00:42:18.229468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.416 [2024-11-18 00:42:18.229488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.416 [2024-11-18 00:42:18.229503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.416 [2024-11-18 00:42:18.229518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.416 [2024-11-18 00:42:18.229532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:54.416 [2024-11-18 00:42:18.229547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:54.416 [2024-11-18 00:42:18.229561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:54.416 [2024-11-18 00:42:18.229575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:54.416 [2024-11-18 00:42:18.229589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:54.416 [2024-11-18 00:42:18.229604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:54.416 [2024-11-18 00:42:18.229618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:54.416 [2024-11-18 00:42:18.230822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:36:54.416 task offset: 81920 on job bdev=Nvme0n1 fails
00:36:54.416
00:36:54.416                                                    Latency(us)
00:36:54.416 [2024-11-17T23:42:18.238Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:36:54.416 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:54.416 Job: Nvme0n1 ended in about 0.40 seconds with error
00:36:54.416 Verification LBA range: start 0x0 length 0x400
00:36:54.416 Nvme0n1                     :       0.40    1608.93     100.56     160.89       0.00   35098.99    2475.80   34369.99
00:36:54.416 [2024-11-17T23:42:18.238Z] ===================================================================================================================
00:36:54.416 [2024-11-17T23:42:18.238Z] Total                       :            1608.93     100.56     160.89       0.00   35098.99    2475.80   34369.99
00:36:54.416 [2024-11-18 00:42:18.232699] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:36:54.674 [2024-11-18 00:42:18.236134] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:36:55.611 00:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 420601
00:36:55.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (420601) - No such process
00:36:55.611 00:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:36:55.611 00:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:36:55.611 00:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:36:55.611 00:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:36:55.611 00:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:36:55.611 00:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:36:55.611 00:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:55.611 00:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:55.611 {
00:36:55.611   "params": {
00:36:55.611     "name": "Nvme$subsystem",
00:36:55.611     "trtype": "$TEST_TRANSPORT",
00:36:55.611     "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:55.611     "adrfam": "ipv4",
00:36:55.611     "trsvcid": "$NVMF_PORT",
00:36:55.611     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:55.611     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:55.611     "hdgst": ${hdgst:-false},
00:36:55.611     "ddgst": ${ddgst:-false}
00:36:55.611   },
00:36:55.611   "method": "bdev_nvme_attach_controller"
00:36:55.611 }
00:36:55.611 EOF
00:36:55.611 )")
00:36:55.611 00:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:36:55.611 00:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:36:55.611 00:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:36:55.611 00:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:55.611   "params": {
00:36:55.611     "name": "Nvme0",
00:36:55.611     "trtype": "tcp",
00:36:55.611     "traddr": "10.0.0.2",
00:36:55.611     "adrfam": "ipv4",
00:36:55.611     "trsvcid": "4420",
00:36:55.611     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:55.611     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:55.611     "hdgst": false,
00:36:55.611     "ddgst": false
00:36:55.611   },
00:36:55.611   "method": "bdev_nvme_attach_controller"
00:36:55.611 }'
00:36:55.611 [2024-11-18 00:42:19.282274] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:36:55.611 [2024-11-18 00:42:19.282393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420763 ]
00:36:55.611 [2024-11-18 00:42:19.354458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:55.611 [2024-11-18 00:42:19.401355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:55.868 Running I/O for 1 seconds...
00:36:56.801 1597.00 IOPS, 99.81 MiB/s
00:36:56.801                                                    Latency(us)
00:36:56.801 [2024-11-17T23:42:20.623Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:36:56.801 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:36:56.801 Verification LBA range: start 0x0 length 0x400
00:36:56.801 Nvme0n1                     :       1.03    1621.21     101.33       0.00       0.00   38847.78    4660.34   34564.17
00:36:56.801 [2024-11-17T23:42:20.623Z] ===================================================================================================================
00:36:56.801 [2024-11-17T23:42:20.623Z] Total                       :            1621.21     101.33       0.00       0.00   38847.78    4660.34   34564.17
00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:36:57.059 00:42:20
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:57.059 rmmod nvme_tcp 00:36:57.059 rmmod nvme_fabrics 00:36:57.059 rmmod nvme_keyring 00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 420444 ']' 00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 420444 00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 420444 ']' 00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 420444 00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:57.059 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 420444 00:36:57.319 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:57.319 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:57.319 00:42:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 420444' 00:36:57.319 killing process with pid 420444 00:36:57.319 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 420444 00:36:57.319 00:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 420444 00:36:57.319 [2024-11-18 00:42:21.065650] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:36:57.319 00:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:57.319 00:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:57.319 00:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:57.319 00:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:36:57.319 00:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:36:57.319 00:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:57.319 00:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:36:57.319 00:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:57.319 00:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:57.319 00:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:57.319 00:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:57.319 00:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:36:59.860
00:36:59.860 real 0m8.630s
00:36:59.860 user 0m16.840s
00:36:59.860 sys 0m3.703s
00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:36:59.860 ************************************
00:36:59.860 END TEST nvmf_host_management
00:36:59.860 ************************************
00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:36:59.860 ************************************
00:36:59.860 START TEST nvmf_lvol
00:36:59.860 ************************************
00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:36:59.860 * Looking for test storage...
00:36:59.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:59.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.860 --rc genhtml_branch_coverage=1 00:36:59.860 --rc genhtml_function_coverage=1 00:36:59.860 --rc genhtml_legend=1 00:36:59.860 --rc geninfo_all_blocks=1 00:36:59.860 --rc geninfo_unexecuted_blocks=1 00:36:59.860 00:36:59.860 ' 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:59.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.860 --rc genhtml_branch_coverage=1 00:36:59.860 --rc genhtml_function_coverage=1 00:36:59.860 --rc genhtml_legend=1 00:36:59.860 --rc geninfo_all_blocks=1 00:36:59.860 --rc geninfo_unexecuted_blocks=1 00:36:59.860 00:36:59.860 ' 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:59.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.860 --rc genhtml_branch_coverage=1 00:36:59.860 --rc genhtml_function_coverage=1 00:36:59.860 --rc genhtml_legend=1 00:36:59.860 --rc geninfo_all_blocks=1 00:36:59.860 --rc geninfo_unexecuted_blocks=1 00:36:59.860 00:36:59.860 ' 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:59.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.860 --rc genhtml_branch_coverage=1 00:36:59.860 --rc genhtml_function_coverage=1 00:36:59.860 --rc genhtml_legend=1 00:36:59.860 --rc geninfo_all_blocks=1 00:36:59.860 --rc geninfo_unexecuted_blocks=1 00:36:59.860 00:36:59.860 ' 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:59.860 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:59.861 
00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:36:59.861 00:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:01.770 00:42:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:01.770 00:42:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:01.770 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:01.770 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:01.770 00:42:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:01.770 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:01.770 00:42:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:01.770 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:01.771 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:01.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:01.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:37:01.771 00:37:01.771 --- 10.0.0.2 ping statistics --- 00:37:01.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.771 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:37:01.771 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:02.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:02.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:37:02.030 00:37:02.030 --- 10.0.0.1 ping statistics --- 00:37:02.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:02.030 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=422952 
00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 422952 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 422952 ']' 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:02.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:02.030 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:02.030 [2024-11-18 00:42:25.666237] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:02.030 [2024-11-18 00:42:25.667324] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:37:02.030 [2024-11-18 00:42:25.667382] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:02.030 [2024-11-18 00:42:25.741269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:02.030 [2024-11-18 00:42:25.787431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:02.030 [2024-11-18 00:42:25.787479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:02.030 [2024-11-18 00:42:25.787494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:02.030 [2024-11-18 00:42:25.787506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:02.030 [2024-11-18 00:42:25.787516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:02.030 [2024-11-18 00:42:25.788985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:02.030 [2024-11-18 00:42:25.789050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:02.030 [2024-11-18 00:42:25.789053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:02.289 [2024-11-18 00:42:25.869451] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:02.289 [2024-11-18 00:42:25.869651] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:02.289 [2024-11-18 00:42:25.869662] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:02.289 [2024-11-18 00:42:25.869914] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:02.289 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:02.289 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:37:02.289 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:02.289 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:02.289 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:02.289 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:02.289 00:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:02.547 [2024-11-18 00:42:26.177846] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:02.547 00:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:02.805 00:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:02.805 00:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:03.063 00:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:03.063 00:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:03.321 00:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:03.579 00:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3bb024a2-ac95-4d83-a981-fa1d15d2d7e2 00:37:03.579 00:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3bb024a2-ac95-4d83-a981-fa1d15d2d7e2 lvol 20 00:37:03.838 00:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=434dd445-12b7-46f9-9bec-8fea579343f3 00:37:03.838 00:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:04.095 00:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 434dd445-12b7-46f9-9bec-8fea579343f3 00:37:04.660 00:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:04.660 [2024-11-18 00:42:28.417984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:04.660 00:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:04.918 
00:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=423382 00:37:04.918 00:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:04.918 00:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:06.293 00:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 434dd445-12b7-46f9-9bec-8fea579343f3 MY_SNAPSHOT 00:37:06.293 00:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5b1e0c35-2210-469d-ab69-075daa101e72 00:37:06.293 00:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 434dd445-12b7-46f9-9bec-8fea579343f3 30 00:37:06.552 00:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5b1e0c35-2210-469d-ab69-075daa101e72 MY_CLONE 00:37:07.117 00:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8298c106-f4eb-4543-a3d2-8928e59b9145 00:37:07.117 00:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8298c106-f4eb-4543-a3d2-8928e59b9145 00:37:07.684 00:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 423382 00:37:15.808 Initializing NVMe Controllers 00:37:15.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:15.808 
Controller IO queue size 128, less than required. 00:37:15.808 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:15.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:15.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:15.808 Initialization complete. Launching workers. 00:37:15.808 ======================================================== 00:37:15.808 Latency(us) 00:37:15.808 Device Information : IOPS MiB/s Average min max 00:37:15.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10625.10 41.50 12054.28 3911.59 56448.40 00:37:15.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10658.70 41.64 12014.87 2016.37 128252.17 00:37:15.808 ======================================================== 00:37:15.808 Total : 21283.80 83.14 12034.55 2016.37 128252.17 00:37:15.808 00:37:15.808 00:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:15.808 00:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 434dd445-12b7-46f9-9bec-8fea579343f3 00:37:16.066 00:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3bb024a2-ac95-4d83-a981-fa1d15d2d7e2 00:37:16.324 00:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:16.324 00:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:16.324 00:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 
-- # nvmftestfini 00:37:16.324 00:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:16.324 00:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:16.324 00:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:16.324 00:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:16.324 00:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:16.324 00:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:16.324 rmmod nvme_tcp 00:37:16.324 rmmod nvme_fabrics 00:37:16.324 rmmod nvme_keyring 00:37:16.324 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:16.324 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:16.324 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:16.324 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 422952 ']' 00:37:16.324 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 422952 00:37:16.324 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 422952 ']' 00:37:16.324 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 422952 00:37:16.324 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:37:16.324 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:16.324 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 422952 00:37:16.324 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:16.324 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:16.324 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 422952' 00:37:16.324 killing process with pid 422952 00:37:16.324 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 422952 00:37:16.324 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 422952 00:37:16.582 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:16.582 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:16.583 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:16.583 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:16.583 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:16.583 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:37:16.583 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:37:16.583 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:16.583 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:16.583 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:16.583 00:42:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:16.583 00:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:19.117 00:37:19.117 real 0m19.132s 00:37:19.117 user 0m56.585s 00:37:19.117 sys 0m7.580s 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:19.117 ************************************ 00:37:19.117 END TEST nvmf_lvol 00:37:19.117 ************************************ 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:19.117 ************************************ 00:37:19.117 START TEST nvmf_lvs_grow 00:37:19.117 ************************************ 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:19.117 * Looking for test storage... 
00:37:19.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:19.117 00:42:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:19.117 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:19.118 00:42:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:19.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:19.118 --rc genhtml_branch_coverage=1 00:37:19.118 --rc genhtml_function_coverage=1 00:37:19.118 --rc genhtml_legend=1 00:37:19.118 --rc geninfo_all_blocks=1 00:37:19.118 --rc geninfo_unexecuted_blocks=1 00:37:19.118 00:37:19.118 ' 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:19.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:19.118 --rc genhtml_branch_coverage=1 00:37:19.118 --rc genhtml_function_coverage=1 00:37:19.118 --rc genhtml_legend=1 00:37:19.118 --rc geninfo_all_blocks=1 00:37:19.118 --rc geninfo_unexecuted_blocks=1 00:37:19.118 00:37:19.118 ' 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:19.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:19.118 --rc genhtml_branch_coverage=1 00:37:19.118 --rc genhtml_function_coverage=1 00:37:19.118 --rc genhtml_legend=1 00:37:19.118 --rc geninfo_all_blocks=1 00:37:19.118 --rc geninfo_unexecuted_blocks=1 00:37:19.118 00:37:19.118 ' 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:19.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:19.118 --rc genhtml_branch_coverage=1 00:37:19.118 --rc genhtml_function_coverage=1 00:37:19.118 --rc genhtml_legend=1 00:37:19.118 --rc geninfo_all_blocks=1 00:37:19.118 --rc 
geninfo_unexecuted_blocks=1 00:37:19.118 00:37:19.118 ' 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:19.118 00:42:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.118 00:42:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:19.118 00:42:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:19.118 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:19.119 00:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:21.022 
00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:21.022 00:42:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:21.022 00:42:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:21.022 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:21.022 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:21.022 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:21.022 00:42:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:21.022 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:21.022 
00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:21.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:21.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:37:21.022 00:37:21.022 --- 10.0.0.2 ping statistics --- 00:37:21.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:21.022 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:21.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:21.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:37:21.022 00:37:21.022 --- 10.0.0.1 ping statistics --- 00:37:21.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:21.022 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:37:21.022 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:21.023 00:42:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=426637 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 426637 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 426637 ']' 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:21.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:21.023 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:21.023 [2024-11-18 00:42:44.760633] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:21.023 [2024-11-18 00:42:44.761644] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:37:21.023 [2024-11-18 00:42:44.761695] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:21.023 [2024-11-18 00:42:44.835032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:21.280 [2024-11-18 00:42:44.884659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:21.280 [2024-11-18 00:42:44.884718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:21.280 [2024-11-18 00:42:44.884747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:21.280 [2024-11-18 00:42:44.884759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:21.280 [2024-11-18 00:42:44.884769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:21.280 [2024-11-18 00:42:44.885396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:21.280 [2024-11-18 00:42:44.979254] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:21.280 [2024-11-18 00:42:44.979590] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:21.280 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:21.281 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:37:21.281 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:21.281 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:21.281 00:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:21.281 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:21.281 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:21.539 [2024-11-18 00:42:45.290032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:21.539 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:21.539 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:21.539 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:21.539 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:21.539 ************************************ 00:37:21.539 START TEST lvs_grow_clean 00:37:21.539 ************************************ 00:37:21.539 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:37:21.539 00:42:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:21.539 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:21.539 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:21.539 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:21.539 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:21.539 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:21.539 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:21.539 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:21.539 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:22.104 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:22.104 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:22.104 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=045828ed-1ae1-48e5-992a-46369ea01f51 00:37:22.362 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 045828ed-1ae1-48e5-992a-46369ea01f51 00:37:22.362 00:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:22.620 00:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:22.620 00:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:22.620 00:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 045828ed-1ae1-48e5-992a-46369ea01f51 lvol 150 00:37:22.878 00:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d69ea4bf-7eac-442c-8785-c0626257cd3b 00:37:22.878 00:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:22.878 00:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:23.136 [2024-11-18 00:42:46.749910] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:23.136 [2024-11-18 00:42:46.750009] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:23.136 true 00:37:23.136 00:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 045828ed-1ae1-48e5-992a-46369ea01f51 00:37:23.136 00:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:23.394 00:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:23.394 00:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:23.651 00:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d69ea4bf-7eac-442c-8785-c0626257cd3b 00:37:23.909 00:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:24.171 [2024-11-18 00:42:47.850240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:24.171 00:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:24.434 00:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=427076 00:37:24.434 00:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:24.434 00:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:24.434 00:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 427076 /var/tmp/bdevperf.sock 00:37:24.434 00:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 427076 ']' 00:37:24.434 00:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:24.434 00:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:24.434 00:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:24.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:37:24.434 00:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:24.434 00:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:24.434 [2024-11-18 00:42:48.188342] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:37:24.434 [2024-11-18 00:42:48.188436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427076 ] 00:37:24.692 [2024-11-18 00:42:48.258112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:24.692 [2024-11-18 00:42:48.309977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:24.692 00:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:24.692 00:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:37:24.692 00:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:25.259 Nvme0n1 00:37:25.259 00:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:25.517 [ 00:37:25.517 { 00:37:25.517 "name": "Nvme0n1", 00:37:25.517 "aliases": [ 00:37:25.517 "d69ea4bf-7eac-442c-8785-c0626257cd3b" 00:37:25.517 ], 00:37:25.517 "product_name": "NVMe disk", 00:37:25.517 
"block_size": 4096, 00:37:25.517 "num_blocks": 38912, 00:37:25.517 "uuid": "d69ea4bf-7eac-442c-8785-c0626257cd3b", 00:37:25.517 "numa_id": 0, 00:37:25.517 "assigned_rate_limits": { 00:37:25.517 "rw_ios_per_sec": 0, 00:37:25.517 "rw_mbytes_per_sec": 0, 00:37:25.517 "r_mbytes_per_sec": 0, 00:37:25.517 "w_mbytes_per_sec": 0 00:37:25.517 }, 00:37:25.517 "claimed": false, 00:37:25.517 "zoned": false, 00:37:25.517 "supported_io_types": { 00:37:25.517 "read": true, 00:37:25.517 "write": true, 00:37:25.517 "unmap": true, 00:37:25.517 "flush": true, 00:37:25.517 "reset": true, 00:37:25.517 "nvme_admin": true, 00:37:25.517 "nvme_io": true, 00:37:25.517 "nvme_io_md": false, 00:37:25.517 "write_zeroes": true, 00:37:25.517 "zcopy": false, 00:37:25.517 "get_zone_info": false, 00:37:25.517 "zone_management": false, 00:37:25.517 "zone_append": false, 00:37:25.517 "compare": true, 00:37:25.517 "compare_and_write": true, 00:37:25.517 "abort": true, 00:37:25.517 "seek_hole": false, 00:37:25.517 "seek_data": false, 00:37:25.517 "copy": true, 00:37:25.517 "nvme_iov_md": false 00:37:25.517 }, 00:37:25.517 "memory_domains": [ 00:37:25.517 { 00:37:25.517 "dma_device_id": "system", 00:37:25.517 "dma_device_type": 1 00:37:25.517 } 00:37:25.517 ], 00:37:25.517 "driver_specific": { 00:37:25.517 "nvme": [ 00:37:25.517 { 00:37:25.517 "trid": { 00:37:25.517 "trtype": "TCP", 00:37:25.517 "adrfam": "IPv4", 00:37:25.517 "traddr": "10.0.0.2", 00:37:25.517 "trsvcid": "4420", 00:37:25.517 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:25.517 }, 00:37:25.517 "ctrlr_data": { 00:37:25.517 "cntlid": 1, 00:37:25.517 "vendor_id": "0x8086", 00:37:25.517 "model_number": "SPDK bdev Controller", 00:37:25.517 "serial_number": "SPDK0", 00:37:25.517 "firmware_revision": "25.01", 00:37:25.517 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:25.517 "oacs": { 00:37:25.517 "security": 0, 00:37:25.517 "format": 0, 00:37:25.517 "firmware": 0, 00:37:25.517 "ns_manage": 0 00:37:25.517 }, 00:37:25.517 "multi_ctrlr": true, 
00:37:25.517 "ana_reporting": false 00:37:25.517 }, 00:37:25.517 "vs": { 00:37:25.517 "nvme_version": "1.3" 00:37:25.517 }, 00:37:25.517 "ns_data": { 00:37:25.517 "id": 1, 00:37:25.517 "can_share": true 00:37:25.517 } 00:37:25.517 } 00:37:25.517 ], 00:37:25.517 "mp_policy": "active_passive" 00:37:25.517 } 00:37:25.517 } 00:37:25.517 ] 00:37:25.517 00:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=427211 00:37:25.517 00:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:25.517 00:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:25.517 Running I/O for 10 seconds... 00:37:26.892 Latency(us) 00:37:26.892 [2024-11-17T23:42:50.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:26.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:26.892 Nvme0n1 : 1.00 14796.00 57.80 0.00 0.00 0.00 0.00 0.00 00:37:26.892 [2024-11-17T23:42:50.714Z] =================================================================================================================== 00:37:26.892 [2024-11-17T23:42:50.714Z] Total : 14796.00 57.80 0.00 0.00 0.00 0.00 0.00 00:37:26.892 00:37:27.457 00:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 045828ed-1ae1-48e5-992a-46369ea01f51 00:37:27.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:27.716 Nvme0n1 : 2.00 15049.50 58.79 0.00 0.00 0.00 0.00 0.00 00:37:27.716 [2024-11-17T23:42:51.538Z] 
=================================================================================================================== 00:37:27.716 [2024-11-17T23:42:51.538Z] Total : 15049.50 58.79 0.00 0.00 0.00 0.00 0.00 00:37:27.716 00:37:27.716 true 00:37:27.716 00:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 045828ed-1ae1-48e5-992a-46369ea01f51 00:37:27.716 00:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:27.974 00:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:27.974 00:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:27.974 00:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 427211 00:37:28.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:28.541 Nvme0n1 : 3.00 15155.33 59.20 0.00 0.00 0.00 0.00 0.00 00:37:28.541 [2024-11-17T23:42:52.363Z] =================================================================================================================== 00:37:28.541 [2024-11-17T23:42:52.363Z] Total : 15155.33 59.20 0.00 0.00 0.00 0.00 0.00 00:37:28.541 00:37:29.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:29.918 Nvme0n1 : 4.00 15271.75 59.66 0.00 0.00 0.00 0.00 0.00 00:37:29.918 [2024-11-17T23:42:53.740Z] =================================================================================================================== 00:37:29.918 [2024-11-17T23:42:53.740Z] Total : 15271.75 59.66 0.00 0.00 0.00 0.00 0.00 00:37:29.918 00:37:30.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:37:30.854 Nvme0n1 : 5.00 15341.60 59.93 0.00 0.00 0.00 0.00 0.00 00:37:30.854 [2024-11-17T23:42:54.676Z] =================================================================================================================== 00:37:30.854 [2024-11-17T23:42:54.676Z] Total : 15341.60 59.93 0.00 0.00 0.00 0.00 0.00 00:37:30.854 00:37:31.789 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:31.789 Nvme0n1 : 6.00 15388.17 60.11 0.00 0.00 0.00 0.00 0.00 00:37:31.789 [2024-11-17T23:42:55.611Z] =================================================================================================================== 00:37:31.789 [2024-11-17T23:42:55.611Z] Total : 15388.17 60.11 0.00 0.00 0.00 0.00 0.00 00:37:31.789 00:37:32.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:32.725 Nvme0n1 : 7.00 15403.29 60.17 0.00 0.00 0.00 0.00 0.00 00:37:32.725 [2024-11-17T23:42:56.547Z] =================================================================================================================== 00:37:32.725 [2024-11-17T23:42:56.547Z] Total : 15403.29 60.17 0.00 0.00 0.00 0.00 0.00 00:37:32.725 00:37:33.659 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:33.659 Nvme0n1 : 8.00 15462.25 60.40 0.00 0.00 0.00 0.00 0.00 00:37:33.659 [2024-11-17T23:42:57.481Z] =================================================================================================================== 00:37:33.659 [2024-11-17T23:42:57.481Z] Total : 15462.25 60.40 0.00 0.00 0.00 0.00 0.00 00:37:33.659 00:37:34.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:34.617 Nvme0n1 : 9.00 15495.89 60.53 0.00 0.00 0.00 0.00 0.00 00:37:34.617 [2024-11-17T23:42:58.439Z] =================================================================================================================== 00:37:34.617 [2024-11-17T23:42:58.439Z] Total : 15495.89 60.53 0.00 0.00 0.00 0.00 0.00 00:37:34.617 
00:37:35.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:35.640 Nvme0n1 : 10.00 15521.10 60.63 0.00 0.00 0.00 0.00 0.00 00:37:35.640 [2024-11-17T23:42:59.462Z] =================================================================================================================== 00:37:35.640 [2024-11-17T23:42:59.462Z] Total : 15521.10 60.63 0.00 0.00 0.00 0.00 0.00 00:37:35.640 00:37:35.640 00:37:35.640 Latency(us) 00:37:35.640 [2024-11-17T23:42:59.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:35.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:35.640 Nvme0n1 : 10.00 15527.63 60.65 0.00 0.00 8238.83 4199.16 18252.99 00:37:35.640 [2024-11-17T23:42:59.462Z] =================================================================================================================== 00:37:35.640 [2024-11-17T23:42:59.462Z] Total : 15527.63 60.65 0.00 0.00 8238.83 4199.16 18252.99 00:37:35.640 { 00:37:35.640 "results": [ 00:37:35.640 { 00:37:35.640 "job": "Nvme0n1", 00:37:35.640 "core_mask": "0x2", 00:37:35.640 "workload": "randwrite", 00:37:35.640 "status": "finished", 00:37:35.640 "queue_depth": 128, 00:37:35.640 "io_size": 4096, 00:37:35.640 "runtime": 10.004035, 00:37:35.640 "iops": 15527.634599439127, 00:37:35.640 "mibps": 60.65482265405909, 00:37:35.640 "io_failed": 0, 00:37:35.640 "io_timeout": 0, 00:37:35.640 "avg_latency_us": 8238.825670155571, 00:37:35.640 "min_latency_us": 4199.158518518519, 00:37:35.640 "max_latency_us": 18252.98962962963 00:37:35.640 } 00:37:35.640 ], 00:37:35.640 "core_count": 1 00:37:35.640 } 00:37:35.640 00:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 427076 00:37:35.640 00:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 427076 ']' 00:37:35.640 00:42:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 427076 00:37:35.640 00:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:37:35.640 00:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:35.640 00:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 427076 00:37:35.640 00:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:35.640 00:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:35.640 00:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 427076' 00:37:35.640 killing process with pid 427076 00:37:35.640 00:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 427076 00:37:35.640 Received shutdown signal, test time was about 10.000000 seconds 00:37:35.640 00:37:35.640 Latency(us) 00:37:35.640 [2024-11-17T23:42:59.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:35.640 [2024-11-17T23:42:59.462Z] =================================================================================================================== 00:37:35.640 [2024-11-17T23:42:59.462Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:35.640 00:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 427076 00:37:35.898 00:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:36.156 00:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:36.415 00:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 045828ed-1ae1-48e5-992a-46369ea01f51 00:37:36.415 00:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:36.673 00:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:36.673 00:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:37:36.673 00:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:36.936 [2024-11-18 00:43:00.709951] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:36.936 00:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 045828ed-1ae1-48e5-992a-46369ea01f51 00:37:36.936 00:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:37:36.936 00:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 045828ed-1ae1-48e5-992a-46369ea01f51 00:37:36.936 00:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:36.936 00:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:36.936 00:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:36.936 00:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:36.936 00:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:36.936 00:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:36.936 00:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:36.936 00:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:36.936 00:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 045828ed-1ae1-48e5-992a-46369ea01f51 00:37:37.195 request: 00:37:37.195 { 00:37:37.195 "uuid": "045828ed-1ae1-48e5-992a-46369ea01f51", 00:37:37.195 "method": 
"bdev_lvol_get_lvstores", 00:37:37.195 "req_id": 1 00:37:37.195 } 00:37:37.195 Got JSON-RPC error response 00:37:37.195 response: 00:37:37.195 { 00:37:37.195 "code": -19, 00:37:37.195 "message": "No such device" 00:37:37.195 } 00:37:37.195 00:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:37:37.195 00:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:37.195 00:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:37.195 00:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:37.195 00:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:37.761 aio_bdev 00:37:37.761 00:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d69ea4bf-7eac-442c-8785-c0626257cd3b 00:37:37.761 00:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=d69ea4bf-7eac-442c-8785-c0626257cd3b 00:37:37.761 00:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:37.761 00:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:37:37.761 00:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:37.761 00:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:37.761 00:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:37.761 00:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d69ea4bf-7eac-442c-8785-c0626257cd3b -t 2000 00:37:38.019 [ 00:37:38.019 { 00:37:38.019 "name": "d69ea4bf-7eac-442c-8785-c0626257cd3b", 00:37:38.019 "aliases": [ 00:37:38.019 "lvs/lvol" 00:37:38.019 ], 00:37:38.019 "product_name": "Logical Volume", 00:37:38.019 "block_size": 4096, 00:37:38.019 "num_blocks": 38912, 00:37:38.019 "uuid": "d69ea4bf-7eac-442c-8785-c0626257cd3b", 00:37:38.019 "assigned_rate_limits": { 00:37:38.019 "rw_ios_per_sec": 0, 00:37:38.019 "rw_mbytes_per_sec": 0, 00:37:38.019 "r_mbytes_per_sec": 0, 00:37:38.019 "w_mbytes_per_sec": 0 00:37:38.019 }, 00:37:38.019 "claimed": false, 00:37:38.019 "zoned": false, 00:37:38.019 "supported_io_types": { 00:37:38.019 "read": true, 00:37:38.019 "write": true, 00:37:38.019 "unmap": true, 00:37:38.019 "flush": false, 00:37:38.019 "reset": true, 00:37:38.019 "nvme_admin": false, 00:37:38.019 "nvme_io": false, 00:37:38.019 "nvme_io_md": false, 00:37:38.019 "write_zeroes": true, 00:37:38.019 "zcopy": false, 00:37:38.019 "get_zone_info": false, 00:37:38.019 "zone_management": false, 00:37:38.019 "zone_append": false, 00:37:38.019 "compare": false, 00:37:38.019 "compare_and_write": false, 00:37:38.019 "abort": false, 00:37:38.019 "seek_hole": true, 00:37:38.019 "seek_data": true, 00:37:38.019 "copy": false, 00:37:38.019 "nvme_iov_md": false 00:37:38.019 }, 00:37:38.019 "driver_specific": { 00:37:38.019 "lvol": { 00:37:38.019 "lvol_store_uuid": "045828ed-1ae1-48e5-992a-46369ea01f51", 00:37:38.019 "base_bdev": "aio_bdev", 00:37:38.019 
"thin_provision": false, 00:37:38.019 "num_allocated_clusters": 38, 00:37:38.019 "snapshot": false, 00:37:38.019 "clone": false, 00:37:38.019 "esnap_clone": false 00:37:38.019 } 00:37:38.019 } 00:37:38.019 } 00:37:38.019 ] 00:37:38.277 00:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:37:38.277 00:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 045828ed-1ae1-48e5-992a-46369ea01f51 00:37:38.277 00:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:38.538 00:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:38.539 00:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 045828ed-1ae1-48e5-992a-46369ea01f51 00:37:38.539 00:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:38.799 00:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:38.799 00:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d69ea4bf-7eac-442c-8785-c0626257cd3b 00:37:39.057 00:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 045828ed-1ae1-48e5-992a-46369ea01f51 
00:37:39.315 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:39.574 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:39.574 00:37:39.574 real 0m17.973s 00:37:39.574 user 0m17.428s 00:37:39.574 sys 0m1.943s 00:37:39.574 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:39.574 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:39.574 ************************************ 00:37:39.574 END TEST lvs_grow_clean 00:37:39.574 ************************************ 00:37:39.574 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:37:39.574 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:39.574 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:39.574 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:39.574 ************************************ 00:37:39.574 START TEST lvs_grow_dirty 00:37:39.574 ************************************ 00:37:39.574 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:37:39.574 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:39.574 00:43:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:39.574 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:39.574 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:39.574 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:39.574 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:39.574 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:39.574 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:39.574 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:39.832 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:40.091 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:40.352 00:43:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2f640b76-e95b-4866-bd73-8685e8878d09 00:37:40.352 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f640b76-e95b-4866-bd73-8685e8878d09 00:37:40.352 00:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:40.609 00:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:40.609 00:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:40.609 00:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2f640b76-e95b-4866-bd73-8685e8878d09 lvol 150 00:37:40.868 00:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=df91d110-83aa-4bf8-a23c-0d72d349fbc6 00:37:40.868 00:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:40.868 00:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:41.126 [2024-11-18 00:43:04.809905] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:41.126 [2024-11-18 
00:43:04.810012] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:41.126 true 00:37:41.126 00:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f640b76-e95b-4866-bd73-8685e8878d09 00:37:41.126 00:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:41.384 00:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:41.384 00:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:41.643 00:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 df91d110-83aa-4bf8-a23c-0d72d349fbc6 00:37:41.900 00:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:42.158 [2024-11-18 00:43:05.910223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:42.158 00:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:42.415 00:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=429237 00:37:42.415 00:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:42.415 00:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:42.415 00:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 429237 /var/tmp/bdevperf.sock 00:37:42.415 00:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 429237 ']' 00:37:42.415 00:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:42.415 00:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:42.415 00:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:42.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:42.415 00:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:42.415 00:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:42.673 [2024-11-18 00:43:06.245226] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:37:42.673 [2024-11-18 00:43:06.245308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid429237 ] 00:37:42.673 [2024-11-18 00:43:06.311861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.673 [2024-11-18 00:43:06.357455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:42.931 00:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:42.931 00:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:37:42.931 00:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:43.196 Nvme0n1 00:37:43.197 00:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:43.457 [ 00:37:43.457 { 00:37:43.457 "name": "Nvme0n1", 00:37:43.457 "aliases": [ 00:37:43.457 "df91d110-83aa-4bf8-a23c-0d72d349fbc6" 00:37:43.457 ], 00:37:43.457 "product_name": "NVMe disk", 00:37:43.457 "block_size": 4096, 00:37:43.457 "num_blocks": 38912, 00:37:43.457 "uuid": "df91d110-83aa-4bf8-a23c-0d72d349fbc6", 00:37:43.457 "numa_id": 0, 00:37:43.457 "assigned_rate_limits": { 00:37:43.457 "rw_ios_per_sec": 0, 00:37:43.457 "rw_mbytes_per_sec": 0, 00:37:43.457 "r_mbytes_per_sec": 0, 00:37:43.457 "w_mbytes_per_sec": 0 00:37:43.457 }, 00:37:43.457 "claimed": false, 00:37:43.457 "zoned": false, 
00:37:43.457 "supported_io_types": { 00:37:43.457 "read": true, 00:37:43.457 "write": true, 00:37:43.457 "unmap": true, 00:37:43.457 "flush": true, 00:37:43.457 "reset": true, 00:37:43.457 "nvme_admin": true, 00:37:43.457 "nvme_io": true, 00:37:43.457 "nvme_io_md": false, 00:37:43.457 "write_zeroes": true, 00:37:43.457 "zcopy": false, 00:37:43.457 "get_zone_info": false, 00:37:43.457 "zone_management": false, 00:37:43.457 "zone_append": false, 00:37:43.457 "compare": true, 00:37:43.457 "compare_and_write": true, 00:37:43.457 "abort": true, 00:37:43.457 "seek_hole": false, 00:37:43.457 "seek_data": false, 00:37:43.457 "copy": true, 00:37:43.457 "nvme_iov_md": false 00:37:43.457 }, 00:37:43.457 "memory_domains": [ 00:37:43.457 { 00:37:43.457 "dma_device_id": "system", 00:37:43.457 "dma_device_type": 1 00:37:43.457 } 00:37:43.457 ], 00:37:43.457 "driver_specific": { 00:37:43.457 "nvme": [ 00:37:43.457 { 00:37:43.457 "trid": { 00:37:43.457 "trtype": "TCP", 00:37:43.457 "adrfam": "IPv4", 00:37:43.457 "traddr": "10.0.0.2", 00:37:43.457 "trsvcid": "4420", 00:37:43.457 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:43.457 }, 00:37:43.457 "ctrlr_data": { 00:37:43.457 "cntlid": 1, 00:37:43.457 "vendor_id": "0x8086", 00:37:43.457 "model_number": "SPDK bdev Controller", 00:37:43.457 "serial_number": "SPDK0", 00:37:43.457 "firmware_revision": "25.01", 00:37:43.457 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:43.457 "oacs": { 00:37:43.457 "security": 0, 00:37:43.457 "format": 0, 00:37:43.457 "firmware": 0, 00:37:43.457 "ns_manage": 0 00:37:43.457 }, 00:37:43.458 "multi_ctrlr": true, 00:37:43.458 "ana_reporting": false 00:37:43.458 }, 00:37:43.458 "vs": { 00:37:43.458 "nvme_version": "1.3" 00:37:43.458 }, 00:37:43.458 "ns_data": { 00:37:43.458 "id": 1, 00:37:43.458 "can_share": true 00:37:43.458 } 00:37:43.458 } 00:37:43.458 ], 00:37:43.458 "mp_policy": "active_passive" 00:37:43.458 } 00:37:43.458 } 00:37:43.458 ] 00:37:43.458 00:43:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=429260 00:37:43.458 00:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:43.458 00:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:43.716 Running I/O for 10 seconds... 00:37:44.650 Latency(us) 00:37:44.650 [2024-11-17T23:43:08.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:44.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:44.651 Nvme0n1 : 1.00 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:37:44.651 [2024-11-17T23:43:08.473Z] =================================================================================================================== 00:37:44.651 [2024-11-17T23:43:08.473Z] Total : 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:37:44.651 00:37:45.598 00:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2f640b76-e95b-4866-bd73-8685e8878d09 00:37:45.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:45.598 Nvme0n1 : 2.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:37:45.598 [2024-11-17T23:43:09.420Z] =================================================================================================================== 00:37:45.598 [2024-11-17T23:43:09.420Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:37:45.598 00:37:45.858 true 00:37:45.858 00:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 2f640b76-e95b-4866-bd73-8685e8878d09 00:37:45.858 00:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:46.116 00:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:46.116 00:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:46.116 00:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 429260 00:37:46.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:46.682 Nvme0n1 : 3.00 14901.33 58.21 0.00 0.00 0.00 0.00 0.00 00:37:46.682 [2024-11-17T23:43:10.504Z] =================================================================================================================== 00:37:46.682 [2024-11-17T23:43:10.504Z] Total : 14901.33 58.21 0.00 0.00 0.00 0.00 0.00 00:37:46.682 00:37:47.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:47.616 Nvme0n1 : 4.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:37:47.616 [2024-11-17T23:43:11.438Z] =================================================================================================================== 00:37:47.616 [2024-11-17T23:43:11.438Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:37:47.616 00:37:48.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:48.551 Nvme0n1 : 5.00 15062.20 58.84 0.00 0.00 0.00 0.00 0.00 00:37:48.551 [2024-11-17T23:43:12.373Z] =================================================================================================================== 00:37:48.551 [2024-11-17T23:43:12.373Z] Total : 15062.20 58.84 0.00 0.00 0.00 0.00 0.00 00:37:48.551 00:37:49.926 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:37:49.926 Nvme0n1 : 6.00 15134.17 59.12 0.00 0.00 0.00 0.00 0.00 00:37:49.926 [2024-11-17T23:43:13.748Z] =================================================================================================================== 00:37:49.926 [2024-11-17T23:43:13.748Z] Total : 15134.17 59.12 0.00 0.00 0.00 0.00 0.00 00:37:49.926 00:37:50.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:50.862 Nvme0n1 : 7.00 15167.43 59.25 0.00 0.00 0.00 0.00 0.00 00:37:50.862 [2024-11-17T23:43:14.684Z] =================================================================================================================== 00:37:50.862 [2024-11-17T23:43:14.684Z] Total : 15167.43 59.25 0.00 0.00 0.00 0.00 0.00 00:37:50.862 00:37:51.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:51.798 Nvme0n1 : 8.00 15208.25 59.41 0.00 0.00 0.00 0.00 0.00 00:37:51.798 [2024-11-17T23:43:15.620Z] =================================================================================================================== 00:37:51.798 [2024-11-17T23:43:15.620Z] Total : 15208.25 59.41 0.00 0.00 0.00 0.00 0.00 00:37:51.798 00:37:52.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:52.734 Nvme0n1 : 9.00 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:37:52.734 [2024-11-17T23:43:16.556Z] =================================================================================================================== 00:37:52.734 [2024-11-17T23:43:16.556Z] Total : 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:37:52.734 00:37:53.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:53.670 Nvme0n1 : 10.00 15252.70 59.58 0.00 0.00 0.00 0.00 0.00 00:37:53.670 [2024-11-17T23:43:17.492Z] =================================================================================================================== 00:37:53.670 [2024-11-17T23:43:17.492Z] Total : 15252.70 59.58 0.00 0.00 0.00 0.00 0.00 00:37:53.670 00:37:53.670 
00:37:53.670 Latency(us) 00:37:53.670 [2024-11-17T23:43:17.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:53.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:53.670 Nvme0n1 : 10.00 15258.19 59.60 0.00 0.00 8384.43 7718.68 18835.53 00:37:53.670 [2024-11-17T23:43:17.492Z] =================================================================================================================== 00:37:53.670 [2024-11-17T23:43:17.492Z] Total : 15258.19 59.60 0.00 0.00 8384.43 7718.68 18835.53 00:37:53.670 { 00:37:53.670 "results": [ 00:37:53.670 { 00:37:53.670 "job": "Nvme0n1", 00:37:53.670 "core_mask": "0x2", 00:37:53.670 "workload": "randwrite", 00:37:53.670 "status": "finished", 00:37:53.670 "queue_depth": 128, 00:37:53.670 "io_size": 4096, 00:37:53.670 "runtime": 10.004792, 00:37:53.670 "iops": 15258.188276178056, 00:37:53.670 "mibps": 59.60229795382053, 00:37:53.670 "io_failed": 0, 00:37:53.670 "io_timeout": 0, 00:37:53.670 "avg_latency_us": 8384.434491039467, 00:37:53.670 "min_latency_us": 7718.684444444444, 00:37:53.670 "max_latency_us": 18835.53185185185 00:37:53.670 } 00:37:53.670 ], 00:37:53.670 "core_count": 1 00:37:53.670 } 00:37:53.670 00:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 429237 00:37:53.670 00:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 429237 ']' 00:37:53.670 00:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 429237 00:37:53.670 00:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:37:53.670 00:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:53.670 00:43:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 429237 00:37:53.670 00:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:53.670 00:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:53.670 00:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 429237' 00:37:53.670 killing process with pid 429237 00:37:53.670 00:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 429237 00:37:53.671 Received shutdown signal, test time was about 10.000000 seconds 00:37:53.671 00:37:53.671 Latency(us) 00:37:53.671 [2024-11-17T23:43:17.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:53.671 [2024-11-17T23:43:17.493Z] =================================================================================================================== 00:37:53.671 [2024-11-17T23:43:17.493Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:53.671 00:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 429237 00:37:53.931 00:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:54.192 00:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:54.450 00:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f640b76-e95b-4866-bd73-8685e8878d09 00:37:54.450 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:54.708 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:54.708 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:37:54.708 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 426637 00:37:54.708 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 426637 00:37:54.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 426637 Killed "${NVMF_APP[@]}" "$@" 00:37:54.708 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:37:54.708 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:37:54.708 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:54.708 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:54.708 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:54.708 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=430578 00:37:54.708 00:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:54.708 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 430578 00:37:54.708 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 430578 ']' 00:37:54.709 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:54.709 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:54.709 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:54.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:54.709 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:54.709 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:54.967 [2024-11-18 00:43:18.538860] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:54.967 [2024-11-18 00:43:18.539915] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:37:54.967 [2024-11-18 00:43:18.539979] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:54.967 [2024-11-18 00:43:18.610267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:54.967 [2024-11-18 00:43:18.652727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:54.967 [2024-11-18 00:43:18.652784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:54.967 [2024-11-18 00:43:18.652812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:54.967 [2024-11-18 00:43:18.652824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:54.967 [2024-11-18 00:43:18.652835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:54.967 [2024-11-18 00:43:18.653428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.967 [2024-11-18 00:43:18.741948] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:54.967 [2024-11-18 00:43:18.742274] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:54.967 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:54.967 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:37:54.967 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:54.967 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:54.967 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:54.967 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:55.225 00:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:55.225 [2024-11-18 00:43:19.036071] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:37:55.225 [2024-11-18 00:43:19.036198] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:37:55.225 [2024-11-18 00:43:19.036244] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:37:55.489 00:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:37:55.489 00:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev df91d110-83aa-4bf8-a23c-0d72d349fbc6 00:37:55.489 00:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=df91d110-83aa-4bf8-a23c-0d72d349fbc6 00:37:55.489 00:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:55.489 00:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:37:55.489 00:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:55.489 00:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:55.489 00:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:55.745 00:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b df91d110-83aa-4bf8-a23c-0d72d349fbc6 -t 2000 00:37:56.002 [ 00:37:56.003 { 00:37:56.003 "name": "df91d110-83aa-4bf8-a23c-0d72d349fbc6", 00:37:56.003 "aliases": [ 00:37:56.003 "lvs/lvol" 00:37:56.003 ], 00:37:56.003 "product_name": "Logical Volume", 00:37:56.003 "block_size": 4096, 00:37:56.003 "num_blocks": 38912, 00:37:56.003 "uuid": "df91d110-83aa-4bf8-a23c-0d72d349fbc6", 00:37:56.003 "assigned_rate_limits": { 00:37:56.003 "rw_ios_per_sec": 0, 00:37:56.003 "rw_mbytes_per_sec": 0, 00:37:56.003 "r_mbytes_per_sec": 0, 00:37:56.003 "w_mbytes_per_sec": 0 00:37:56.003 }, 00:37:56.003 "claimed": false, 00:37:56.003 "zoned": false, 00:37:56.003 "supported_io_types": { 00:37:56.003 "read": true, 00:37:56.003 "write": true, 00:37:56.003 "unmap": true, 00:37:56.003 "flush": false, 00:37:56.003 "reset": true, 00:37:56.003 "nvme_admin": false, 00:37:56.003 "nvme_io": false, 00:37:56.003 "nvme_io_md": false, 00:37:56.003 "write_zeroes": true, 
00:37:56.003 "zcopy": false, 00:37:56.003 "get_zone_info": false, 00:37:56.003 "zone_management": false, 00:37:56.003 "zone_append": false, 00:37:56.003 "compare": false, 00:37:56.003 "compare_and_write": false, 00:37:56.003 "abort": false, 00:37:56.003 "seek_hole": true, 00:37:56.003 "seek_data": true, 00:37:56.003 "copy": false, 00:37:56.003 "nvme_iov_md": false 00:37:56.003 }, 00:37:56.003 "driver_specific": { 00:37:56.003 "lvol": { 00:37:56.003 "lvol_store_uuid": "2f640b76-e95b-4866-bd73-8685e8878d09", 00:37:56.003 "base_bdev": "aio_bdev", 00:37:56.003 "thin_provision": false, 00:37:56.003 "num_allocated_clusters": 38, 00:37:56.003 "snapshot": false, 00:37:56.003 "clone": false, 00:37:56.003 "esnap_clone": false 00:37:56.003 } 00:37:56.003 } 00:37:56.003 } 00:37:56.003 ] 00:37:56.003 00:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:37:56.003 00:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f640b76-e95b-4866-bd73-8685e8878d09 00:37:56.003 00:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:37:56.261 00:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:37:56.261 00:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f640b76-e95b-4866-bd73-8685e8878d09 00:37:56.261 00:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:37:56.519 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:37:56.519 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:56.777 [2024-11-18 00:43:20.425947] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:56.777 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f640b76-e95b-4866-bd73-8685e8878d09 00:37:56.777 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:37:56.777 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f640b76-e95b-4866-bd73-8685e8878d09 00:37:56.777 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:56.777 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:56.778 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:56.778 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:56.778 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:56.778 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:56.778 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:56.778 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:56.778 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f640b76-e95b-4866-bd73-8685e8878d09 00:37:57.039 request: 00:37:57.039 { 00:37:57.039 "uuid": "2f640b76-e95b-4866-bd73-8685e8878d09", 00:37:57.039 "method": "bdev_lvol_get_lvstores", 00:37:57.039 "req_id": 1 00:37:57.039 } 00:37:57.039 Got JSON-RPC error response 00:37:57.039 response: 00:37:57.039 { 00:37:57.039 "code": -19, 00:37:57.039 "message": "No such device" 00:37:57.039 } 00:37:57.039 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:37:57.039 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:57.039 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:57.039 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:57.039 00:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:57.299 aio_bdev 00:37:57.299 00:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev df91d110-83aa-4bf8-a23c-0d72d349fbc6 00:37:57.299 00:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=df91d110-83aa-4bf8-a23c-0d72d349fbc6 00:37:57.299 00:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:57.299 00:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:37:57.299 00:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:57.299 00:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:57.299 00:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:57.556 00:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b df91d110-83aa-4bf8-a23c-0d72d349fbc6 -t 2000 00:37:57.819 [ 00:37:57.819 { 00:37:57.819 "name": "df91d110-83aa-4bf8-a23c-0d72d349fbc6", 00:37:57.819 "aliases": [ 00:37:57.819 "lvs/lvol" 00:37:57.819 ], 00:37:57.819 "product_name": "Logical Volume", 00:37:57.819 "block_size": 4096, 00:37:57.819 "num_blocks": 38912, 00:37:57.819 "uuid": "df91d110-83aa-4bf8-a23c-0d72d349fbc6", 00:37:57.819 "assigned_rate_limits": { 00:37:57.819 "rw_ios_per_sec": 0, 00:37:57.819 "rw_mbytes_per_sec": 0, 00:37:57.819 
"r_mbytes_per_sec": 0, 00:37:57.819 "w_mbytes_per_sec": 0 00:37:57.819 }, 00:37:57.819 "claimed": false, 00:37:57.819 "zoned": false, 00:37:57.819 "supported_io_types": { 00:37:57.819 "read": true, 00:37:57.819 "write": true, 00:37:57.819 "unmap": true, 00:37:57.819 "flush": false, 00:37:57.819 "reset": true, 00:37:57.819 "nvme_admin": false, 00:37:57.819 "nvme_io": false, 00:37:57.819 "nvme_io_md": false, 00:37:57.819 "write_zeroes": true, 00:37:57.819 "zcopy": false, 00:37:57.819 "get_zone_info": false, 00:37:57.819 "zone_management": false, 00:37:57.819 "zone_append": false, 00:37:57.819 "compare": false, 00:37:57.819 "compare_and_write": false, 00:37:57.819 "abort": false, 00:37:57.819 "seek_hole": true, 00:37:57.819 "seek_data": true, 00:37:57.819 "copy": false, 00:37:57.820 "nvme_iov_md": false 00:37:57.820 }, 00:37:57.820 "driver_specific": { 00:37:57.820 "lvol": { 00:37:57.820 "lvol_store_uuid": "2f640b76-e95b-4866-bd73-8685e8878d09", 00:37:57.820 "base_bdev": "aio_bdev", 00:37:57.820 "thin_provision": false, 00:37:57.820 "num_allocated_clusters": 38, 00:37:57.820 "snapshot": false, 00:37:57.820 "clone": false, 00:37:57.820 "esnap_clone": false 00:37:57.820 } 00:37:57.820 } 00:37:57.820 } 00:37:57.820 ] 00:37:57.820 00:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:37:57.820 00:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f640b76-e95b-4866-bd73-8685e8878d09 00:37:57.820 00:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:58.077 00:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:58.078 00:43:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2f640b76-e95b-4866-bd73-8685e8878d09 00:37:58.078 00:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:58.335 00:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:58.335 00:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete df91d110-83aa-4bf8-a23c-0d72d349fbc6 00:37:58.594 00:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2f640b76-e95b-4866-bd73-8685e8878d09 00:37:59.160 00:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:59.160 00:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:59.419 00:37:59.419 real 0m19.637s 00:37:59.419 user 0m36.740s 00:37:59.419 sys 0m4.624s 00:37:59.419 00:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:59.419 00:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:59.419 ************************************ 00:37:59.419 END TEST lvs_grow_dirty 00:37:59.419 ************************************ 
00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:37:59.419 nvmf_trace.0 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:59.419 00:43:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:59.419 rmmod nvme_tcp 00:37:59.419 rmmod nvme_fabrics 00:37:59.419 rmmod nvme_keyring 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 430578 ']' 00:37:59.419 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 430578 00:37:59.420 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 430578 ']' 00:37:59.420 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 430578 00:37:59.420 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:37:59.420 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:59.420 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 430578 00:37:59.420 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:59.420 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:59.420 00:43:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 430578' 00:37:59.420 killing process with pid 430578 00:37:59.420 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 430578 00:37:59.420 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 430578 00:37:59.677 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:59.677 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:59.677 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:59.677 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:37:59.677 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:37:59.677 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:59.677 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:37:59.677 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:59.677 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:59.677 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:59.677 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:59.677 00:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:01.577 00:43:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:01.577 00:38:01.577 real 0m43.005s 00:38:01.577 user 0m55.869s 00:38:01.577 sys 0m8.530s 00:38:01.577 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:01.577 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:01.577 ************************************ 00:38:01.577 END TEST nvmf_lvs_grow 00:38:01.577 ************************************ 00:38:01.835 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:01.835 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:01.835 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:01.835 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:01.835 ************************************ 00:38:01.836 START TEST nvmf_bdev_io_wait 00:38:01.836 ************************************ 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:01.836 * Looking for test storage... 
00:38:01.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:01.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.836 --rc genhtml_branch_coverage=1 00:38:01.836 --rc genhtml_function_coverage=1 00:38:01.836 --rc genhtml_legend=1 00:38:01.836 --rc geninfo_all_blocks=1 00:38:01.836 --rc geninfo_unexecuted_blocks=1 00:38:01.836 00:38:01.836 ' 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:01.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.836 --rc genhtml_branch_coverage=1 00:38:01.836 --rc genhtml_function_coverage=1 00:38:01.836 --rc genhtml_legend=1 00:38:01.836 --rc geninfo_all_blocks=1 00:38:01.836 --rc geninfo_unexecuted_blocks=1 00:38:01.836 00:38:01.836 ' 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:01.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.836 --rc genhtml_branch_coverage=1 00:38:01.836 --rc genhtml_function_coverage=1 00:38:01.836 --rc genhtml_legend=1 00:38:01.836 --rc geninfo_all_blocks=1 00:38:01.836 --rc geninfo_unexecuted_blocks=1 00:38:01.836 00:38:01.836 ' 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:01.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.836 --rc genhtml_branch_coverage=1 00:38:01.836 --rc genhtml_function_coverage=1 
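The xtrace above shows `lt 1.15 2` splitting both versions on `.`/`-` into arrays and comparing them component by component. A minimal standalone sketch of that comparison (a hypothetical re-implementation for illustration, not SPDK's actual scripts/common.sh):

```shell
#!/usr/bin/env bash
# lt VER1 VER2 -> returns 0 (true) when VER1 is strictly older than VER2.
# Mirrors the traced logic: split on '.' and '-', compare numerically,
# treating missing components as 0 (so "2" compares like "2.0").
lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v a b len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( 10#$a < 10#$b )) && return 0   # earlier component decides
        (( 10#$a > 10#$b )) && return 1
    done
    return 1   # all components equal -> not less-than
}

lt 1.15 2 && echo "1.15 is older than 2"
```

This is why the trace returns 0 for `lt 1.15 2` and enables the extra lcov branch/function coverage options: lcov 1.15 predates the 2.x option names.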
00:38:01.836 --rc genhtml_legend=1 00:38:01.836 --rc geninfo_all_blocks=1 00:38:01.836 --rc geninfo_unexecuted_blocks=1 00:38:01.836 00:38:01.836 ' 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:01.836 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:01.837 00:43:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.837 00:43:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:01.837 00:43:25 
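Each `paths/export.sh` step above prepends the same toolchain directories again, so the logged PATH accumulates many duplicate entries. Shell PATH lookup uses the first match, so this is harmless but noisy; a small dedup helper (a hypothetical utility for illustration, not part of SPDK) shows how such a PATH could be compacted while preserving lookup order:

```shell
#!/usr/bin/env bash
# dedup_path PATHSTRING -> same entries, first occurrence of each kept.
# awk splits records on ':' and prints only unseen entries; sed trims the
# trailing ':' that the output record separator leaves behind.
dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin:/usr/bin"
```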
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:01.837 00:43:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:01.837 00:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:04.368 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:04.368 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:04.369 00:43:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:04.369 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:04.369 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:04.369 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:04.369 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:38:04.369 00:43:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:04.369 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:04.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:04.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:38:04.370 00:38:04.370 --- 10.0.0.2 ping statistics --- 00:38:04.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:04.370 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:04.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:04.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:38:04.370 00:38:04.370 --- 10.0.0.1 ping statistics --- 00:38:04.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:04.370 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:04.370 00:43:27 
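The `nvmf_tcp_init` steps traced above build the test topology from a two-port physical NIC: one port (`cvl_0_0`) is moved into a network namespace as the target side at 10.0.0.2, while its sibling (`cvl_0_1`) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening the NVMe/TCP port and pings in both directions as a reachability check. A dry-run sketch that only prints the equivalent commands (interface names and addresses mirror the log; actually executing them requires root and matching hardware):

```shell
#!/usr/bin/env bash
# Print the namespace-topology commands the log executes, without running
# them, so the sequence can be inspected on any machine.
netns_plan() {
    local ns=cvl_0_0_ns_spdk
    cat <<EOF
ip netns add $ns
ip link set cvl_0_0 netns $ns
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $ns ip link set cvl_0_0 up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $ns ping -c 1 10.0.0.1
EOF
}

netns_plan
```

Keeping the target port inside a namespace lets the nvmf target and the initiator share one host while still exercising a real NIC data path.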
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=433205 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 433205 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 433205 ']' 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:04.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:04.370 00:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:04.370 [2024-11-18 00:43:27.994070] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:04.370 [2024-11-18 00:43:27.995284] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:38:04.370 [2024-11-18 00:43:27.995393] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:04.370 [2024-11-18 00:43:28.074548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:04.370 [2024-11-18 00:43:28.126127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:04.370 [2024-11-18 00:43:28.126191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:04.370 [2024-11-18 00:43:28.126221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:04.370 [2024-11-18 00:43:28.126233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:04.370 [2024-11-18 00:43:28.126242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:04.370 [2024-11-18 00:43:28.127879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:04.370 [2024-11-18 00:43:28.127940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:04.370 [2024-11-18 00:43:28.127912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:04.370 [2024-11-18 00:43:28.127944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:04.370 [2024-11-18 00:43:28.128514] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.629 00:43:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:04.629 [2024-11-18 00:43:28.334014] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:04.629 [2024-11-18 00:43:28.334239] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:04.629 [2024-11-18 00:43:28.335141] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:04.629 [2024-11-18 00:43:28.335989] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:04.629 [2024-11-18 00:43:28.340713] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:04.629 Malloc0 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.629 00:43:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:04.629 [2024-11-18 00:43:28.392904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=433232 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=433234 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:04.629 00:43:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:04.629 { 00:38:04.629 "params": { 00:38:04.629 "name": "Nvme$subsystem", 00:38:04.629 "trtype": "$TEST_TRANSPORT", 00:38:04.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:04.629 "adrfam": "ipv4", 00:38:04.629 "trsvcid": "$NVMF_PORT", 00:38:04.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:04.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:04.629 "hdgst": ${hdgst:-false}, 00:38:04.629 "ddgst": ${ddgst:-false} 00:38:04.629 }, 00:38:04.629 "method": "bdev_nvme_attach_controller" 00:38:04.629 } 00:38:04.629 EOF 00:38:04.629 )") 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=433236 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:04.629 00:43:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=433239 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:04.629 { 00:38:04.629 "params": { 00:38:04.629 "name": "Nvme$subsystem", 00:38:04.629 "trtype": "$TEST_TRANSPORT", 00:38:04.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:04.629 "adrfam": "ipv4", 00:38:04.629 "trsvcid": "$NVMF_PORT", 00:38:04.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:04.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:04.629 "hdgst": ${hdgst:-false}, 00:38:04.629 "ddgst": ${ddgst:-false} 00:38:04.629 }, 00:38:04.629 "method": "bdev_nvme_attach_controller" 00:38:04.629 } 00:38:04.629 EOF 00:38:04.629 )") 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:04.629 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:04.630 { 00:38:04.630 "params": { 00:38:04.630 "name": "Nvme$subsystem", 00:38:04.630 "trtype": "$TEST_TRANSPORT", 00:38:04.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:04.630 "adrfam": "ipv4", 00:38:04.630 "trsvcid": "$NVMF_PORT", 00:38:04.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:04.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:04.630 "hdgst": ${hdgst:-false}, 00:38:04.630 "ddgst": ${ddgst:-false} 00:38:04.630 }, 00:38:04.630 "method": "bdev_nvme_attach_controller" 00:38:04.630 } 00:38:04.630 EOF 00:38:04.630 )") 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:04.630 { 00:38:04.630 "params": { 00:38:04.630 "name": "Nvme$subsystem", 00:38:04.630 "trtype": "$TEST_TRANSPORT", 00:38:04.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:04.630 "adrfam": "ipv4", 00:38:04.630 "trsvcid": "$NVMF_PORT", 00:38:04.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:04.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:04.630 "hdgst": ${hdgst:-false}, 00:38:04.630 "ddgst": ${ddgst:-false} 00:38:04.630 }, 00:38:04.630 "method": 
"bdev_nvme_attach_controller" 00:38:04.630 } 00:38:04.630 EOF 00:38:04.630 )") 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 433232 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:04.630 "params": { 00:38:04.630 "name": "Nvme1", 00:38:04.630 "trtype": "tcp", 00:38:04.630 "traddr": "10.0.0.2", 00:38:04.630 "adrfam": "ipv4", 00:38:04.630 "trsvcid": "4420", 00:38:04.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:04.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:04.630 "hdgst": false, 00:38:04.630 "ddgst": false 00:38:04.630 }, 00:38:04.630 "method": "bdev_nvme_attach_controller" 00:38:04.630 }' 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:04.630 "params": { 00:38:04.630 "name": "Nvme1", 00:38:04.630 "trtype": "tcp", 00:38:04.630 "traddr": "10.0.0.2", 00:38:04.630 "adrfam": "ipv4", 00:38:04.630 "trsvcid": "4420", 00:38:04.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:04.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:04.630 "hdgst": false, 00:38:04.630 "ddgst": false 00:38:04.630 }, 00:38:04.630 "method": "bdev_nvme_attach_controller" 00:38:04.630 }' 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:04.630 "params": { 00:38:04.630 "name": "Nvme1", 00:38:04.630 "trtype": "tcp", 00:38:04.630 "traddr": "10.0.0.2", 00:38:04.630 "adrfam": "ipv4", 00:38:04.630 "trsvcid": "4420", 00:38:04.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:04.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:04.630 "hdgst": false, 00:38:04.630 "ddgst": false 00:38:04.630 }, 00:38:04.630 "method": "bdev_nvme_attach_controller" 00:38:04.630 }' 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:04.630 00:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:04.630 "params": { 00:38:04.630 "name": "Nvme1", 00:38:04.630 "trtype": "tcp", 00:38:04.630 "traddr": "10.0.0.2", 00:38:04.630 "adrfam": "ipv4", 00:38:04.630 "trsvcid": "4420", 00:38:04.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:04.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:04.630 "hdgst": false, 00:38:04.630 "ddgst": false 00:38:04.630 }, 00:38:04.630 "method": "bdev_nvme_attach_controller" 
00:38:04.630 }' 00:38:04.630 [2024-11-18 00:43:28.443477] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:38:04.630 [2024-11-18 00:43:28.443477] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:38:04.630 [2024-11-18 00:43:28.443477] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:38:04.630 [2024-11-18 00:43:28.443572] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:04.630 [2024-11-18 00:43:28.443571] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:04.630 [2024-11-18 00:43:28.443571] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:04.630 [2024-11-18 00:43:28.444251] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:38:04.630 [2024-11-18 00:43:28.444341] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:04.888 [2024-11-18 00:43:28.627827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:04.888 [2024-11-18 00:43:28.670653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:05.146 [2024-11-18 00:43:28.729626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:05.146 [2024-11-18 00:43:28.771869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:05.146 [2024-11-18 00:43:28.832750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:05.146 [2024-11-18 00:43:28.877123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:05.146 [2024-11-18 00:43:28.900446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:05.146 [2024-11-18 00:43:28.938994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:05.405 Running I/O for 1 seconds... 00:38:05.405 Running I/O for 1 seconds... 00:38:05.405 Running I/O for 1 seconds... 00:38:05.405 Running I/O for 1 seconds... 
00:38:06.354 6697.00 IOPS, 26.16 MiB/s [2024-11-17T23:43:30.176Z] 8817.00 IOPS, 34.44 MiB/s [2024-11-17T23:43:30.176Z] 187640.00 IOPS, 732.97 MiB/s 00:38:06.354 Latency(us) 00:38:06.354 [2024-11-17T23:43:30.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:06.354 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:06.354 Nvme1n1 : 1.00 187289.71 731.60 0.00 0.00 679.80 298.86 1868.99 00:38:06.354 [2024-11-17T23:43:30.176Z] =================================================================================================================== 00:38:06.354 [2024-11-17T23:43:30.176Z] Total : 187289.71 731.60 0.00 0.00 679.80 298.86 1868.99 00:38:06.354 00:38:06.354 Latency(us) 00:38:06.354 [2024-11-17T23:43:30.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:06.354 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:06.354 Nvme1n1 : 1.01 8856.83 34.60 0.00 0.00 14375.07 4903.06 19709.35 00:38:06.354 [2024-11-17T23:43:30.176Z] =================================================================================================================== 00:38:06.354 [2024-11-17T23:43:30.176Z] Total : 8856.83 34.60 0.00 0.00 14375.07 4903.06 19709.35 00:38:06.354 00:38:06.354 Latency(us) 00:38:06.354 [2024-11-17T23:43:30.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:06.354 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:06.354 Nvme1n1 : 1.02 6671.25 26.06 0.00 0.00 18945.93 1953.94 28350.39 00:38:06.354 [2024-11-17T23:43:30.176Z] =================================================================================================================== 00:38:06.354 [2024-11-17T23:43:30.176Z] Total : 6671.25 26.06 0.00 0.00 18945.93 1953.94 28350.39 00:38:06.354 6799.00 IOPS, 26.56 MiB/s 00:38:06.354 Latency(us) 00:38:06.354 [2024-11-17T23:43:30.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:38:06.354 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:06.354 Nvme1n1 : 1.01 6925.50 27.05 0.00 0.00 18431.96 4004.98 37282.70 00:38:06.354 [2024-11-17T23:43:30.176Z] =================================================================================================================== 00:38:06.354 [2024-11-17T23:43:30.176Z] Total : 6925.50 27.05 0.00 0.00 18431.96 4004.98 37282.70 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 433234 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 433236 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 433239 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp 
== tcp ']' 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:06.613 rmmod nvme_tcp 00:38:06.613 rmmod nvme_fabrics 00:38:06.613 rmmod nvme_keyring 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 433205 ']' 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 433205 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 433205 ']' 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 433205 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 433205 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:06.613 00:43:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 433205' 00:38:06.613 killing process with pid 433205 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 433205 00:38:06.613 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 433205 00:38:06.872 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:06.872 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:06.872 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:06.872 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:06.872 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:38:06.872 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:38:06.872 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:06.872 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:06.872 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:06.872 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:06.872 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 15> /dev/null' 00:38:06.872 00:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:08.782 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:08.782 00:38:08.782 real 0m7.122s 00:38:08.782 user 0m13.271s 00:38:08.782 sys 0m3.902s 00:38:08.782 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:08.782 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:08.782 ************************************ 00:38:08.782 END TEST nvmf_bdev_io_wait 00:38:08.782 ************************************ 00:38:08.782 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:08.782 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:08.782 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:08.782 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:09.042 ************************************ 00:38:09.042 START TEST nvmf_queue_depth 00:38:09.042 ************************************ 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:09.042 * Looking for test storage... 
00:38:09.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:09.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.042 --rc genhtml_branch_coverage=1 00:38:09.042 --rc genhtml_function_coverage=1 00:38:09.042 --rc genhtml_legend=1 00:38:09.042 --rc geninfo_all_blocks=1 00:38:09.042 --rc geninfo_unexecuted_blocks=1 00:38:09.042 00:38:09.042 ' 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:09.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.042 --rc genhtml_branch_coverage=1 00:38:09.042 --rc genhtml_function_coverage=1 00:38:09.042 --rc genhtml_legend=1 00:38:09.042 --rc geninfo_all_blocks=1 00:38:09.042 --rc geninfo_unexecuted_blocks=1 00:38:09.042 00:38:09.042 ' 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:09.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.042 --rc genhtml_branch_coverage=1 00:38:09.042 --rc genhtml_function_coverage=1 00:38:09.042 --rc genhtml_legend=1 00:38:09.042 --rc geninfo_all_blocks=1 00:38:09.042 --rc geninfo_unexecuted_blocks=1 00:38:09.042 00:38:09.042 ' 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:09.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.042 --rc genhtml_branch_coverage=1 00:38:09.042 --rc genhtml_function_coverage=1 00:38:09.042 --rc genhtml_legend=1 00:38:09.042 --rc 
geninfo_all_blocks=1 00:38:09.042 --rc geninfo_unexecuted_blocks=1 00:38:09.042 00:38:09.042 ' 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:09.042 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.043 00:43:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:09.043 00:43:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:09.043 00:43:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:09.043 00:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:10.957 
00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:10.957 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:10.957 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:10.958 00:43:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:10.958 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:10.958 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:10.958 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:10.958 00:43:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:10.958 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:11.216 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:11.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:11.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:38:11.217 00:38:11.217 --- 10.0.0.2 ping statistics --- 00:38:11.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:11.217 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:11.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:11.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:38:11.217 00:38:11.217 --- 10.0.0.1 ping statistics --- 00:38:11.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:11.217 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:11.217 00:43:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=435457 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 435457 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 435457 ']' 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:11.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:11.217 00:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:11.217 [2024-11-18 00:43:34.981655] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:11.217 [2024-11-18 00:43:34.982786] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:38:11.217 [2024-11-18 00:43:34.982859] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:11.476 [2024-11-18 00:43:35.062429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.476 [2024-11-18 00:43:35.106969] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:11.476 [2024-11-18 00:43:35.107043] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:11.476 [2024-11-18 00:43:35.107071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:11.476 [2024-11-18 00:43:35.107082] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:11.476 [2024-11-18 00:43:35.107091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:11.476 [2024-11-18 00:43:35.107727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:11.476 [2024-11-18 00:43:35.190078] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:11.476 [2024-11-18 00:43:35.190401] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:11.476 [2024-11-18 00:43:35.244329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:11.476 Malloc0 00:38:11.476 00:43:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.476 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:11.734 [2024-11-18 00:43:35.300447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:11.734 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.734 
00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=435476 00:38:11.734 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:11.734 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:11.734 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 435476 /var/tmp/bdevperf.sock 00:38:11.734 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 435476 ']' 00:38:11.734 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:11.734 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:11.734 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:11.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:11.734 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:11.734 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:11.734 [2024-11-18 00:43:35.346677] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:38:11.734 [2024-11-18 00:43:35.346753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435476 ] 00:38:11.734 [2024-11-18 00:43:35.411777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.734 [2024-11-18 00:43:35.456800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:11.992 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:11.992 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:11.992 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:11.992 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:11.992 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:11.992 NVMe0n1 00:38:11.992 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.992 00:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:12.250 Running I/O for 10 seconds... 
00:38:14.140 8211.00 IOPS, 32.07 MiB/s [2024-11-17T23:43:39.342Z] 8564.00 IOPS, 33.45 MiB/s [2024-11-17T23:43:40.274Z] 8534.67 IOPS, 33.34 MiB/s [2024-11-17T23:43:41.209Z] 8633.00 IOPS, 33.72 MiB/s [2024-11-17T23:43:42.144Z] 8605.40 IOPS, 33.61 MiB/s [2024-11-17T23:43:43.078Z] 8672.67 IOPS, 33.88 MiB/s [2024-11-17T23:43:44.012Z] 8637.57 IOPS, 33.74 MiB/s [2024-11-17T23:43:45.387Z] 8699.50 IOPS, 33.98 MiB/s [2024-11-17T23:43:46.320Z] 8697.44 IOPS, 33.97 MiB/s [2024-11-17T23:43:46.320Z] 8704.20 IOPS, 34.00 MiB/s 00:38:22.498 Latency(us) 00:38:22.498 [2024-11-17T23:43:46.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:22.498 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:38:22.498 Verification LBA range: start 0x0 length 0x4000 00:38:22.498 NVMe0n1 : 10.12 8699.57 33.98 0.00 0.00 116821.76 20777.34 88934.78 00:38:22.498 [2024-11-17T23:43:46.320Z] =================================================================================================================== 00:38:22.498 [2024-11-17T23:43:46.320Z] Total : 8699.57 33.98 0.00 0.00 116821.76 20777.34 88934.78 00:38:22.498 { 00:38:22.498 "results": [ 00:38:22.498 { 00:38:22.498 "job": "NVMe0n1", 00:38:22.498 "core_mask": "0x1", 00:38:22.498 "workload": "verify", 00:38:22.498 "status": "finished", 00:38:22.498 "verify_range": { 00:38:22.498 "start": 0, 00:38:22.498 "length": 16384 00:38:22.498 }, 00:38:22.498 "queue_depth": 1024, 00:38:22.498 "io_size": 4096, 00:38:22.498 "runtime": 10.123034, 00:38:22.498 "iops": 8699.56576259647, 00:38:22.498 "mibps": 33.98267876014246, 00:38:22.498 "io_failed": 0, 00:38:22.498 "io_timeout": 0, 00:38:22.498 "avg_latency_us": 116821.75729441977, 00:38:22.498 "min_latency_us": 20777.33925925926, 00:38:22.498 "max_latency_us": 88934.77925925926 00:38:22.498 } 00:38:22.498 ], 00:38:22.498 "core_count": 1 00:38:22.498 } 00:38:22.498 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 435476 00:38:22.498 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 435476 ']' 00:38:22.498 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 435476 00:38:22.498 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:22.498 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:22.498 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 435476 00:38:22.498 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:22.498 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:22.498 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 435476' 00:38:22.498 killing process with pid 435476 00:38:22.498 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 435476 00:38:22.498 Received shutdown signal, test time was about 10.000000 seconds 00:38:22.498 00:38:22.498 Latency(us) 00:38:22.498 [2024-11-17T23:43:46.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:22.498 [2024-11-17T23:43:46.320Z] =================================================================================================================== 00:38:22.498 [2024-11-17T23:43:46.320Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:22.498 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 435476 00:38:22.498 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth 
-- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:22.498 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:22.498 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:22.498 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:22.498 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:22.498 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:22.498 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:22.499 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:22.499 rmmod nvme_tcp 00:38:22.757 rmmod nvme_fabrics 00:38:22.757 rmmod nvme_keyring 00:38:22.757 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:22.757 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:22.757 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:22.757 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 435457 ']' 00:38:22.757 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 435457 00:38:22.757 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 435457 ']' 00:38:22.757 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 435457 00:38:22.757 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@959 -- # uname 00:38:22.757 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:22.757 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 435457 00:38:22.757 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:22.757 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:22.757 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 435457' 00:38:22.757 killing process with pid 435457 00:38:22.757 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 435457 00:38:22.757 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 435457 00:38:23.017 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:23.017 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:23.017 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:23.017 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:23.017 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:38:23.017 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:38:23.017 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:23.017 00:43:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:23.017 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:23.017 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:23.017 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:23.017 00:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:24.926 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:24.926 00:38:24.926 real 0m16.046s 00:38:24.926 user 0m22.214s 00:38:24.927 sys 0m3.315s 00:38:24.927 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:24.927 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:24.927 ************************************ 00:38:24.927 END TEST nvmf_queue_depth 00:38:24.927 ************************************ 00:38:24.927 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:24.927 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:24.927 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:24.927 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:24.927 ************************************ 00:38:24.927 START TEST 
nvmf_target_multipath 00:38:24.927 ************************************ 00:38:24.927 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:25.186 * Looking for test storage... 00:38:25.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:25.186 00:43:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:25.186 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:25.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.187 --rc genhtml_branch_coverage=1 00:38:25.187 --rc genhtml_function_coverage=1 00:38:25.187 --rc genhtml_legend=1 00:38:25.187 --rc geninfo_all_blocks=1 00:38:25.187 --rc geninfo_unexecuted_blocks=1 00:38:25.187 00:38:25.187 ' 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:25.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.187 --rc genhtml_branch_coverage=1 00:38:25.187 --rc genhtml_function_coverage=1 00:38:25.187 --rc genhtml_legend=1 00:38:25.187 --rc geninfo_all_blocks=1 00:38:25.187 --rc geninfo_unexecuted_blocks=1 00:38:25.187 00:38:25.187 ' 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:25.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.187 --rc genhtml_branch_coverage=1 00:38:25.187 --rc genhtml_function_coverage=1 00:38:25.187 --rc genhtml_legend=1 00:38:25.187 --rc geninfo_all_blocks=1 00:38:25.187 --rc geninfo_unexecuted_blocks=1 00:38:25.187 00:38:25.187 ' 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:25.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.187 --rc genhtml_branch_coverage=1 00:38:25.187 --rc genhtml_function_coverage=1 00:38:25.187 --rc genhtml_legend=1 00:38:25.187 --rc geninfo_all_blocks=1 00:38:25.187 --rc geninfo_unexecuted_blocks=1 00:38:25.187 00:38:25.187 ' 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:25.187 00:43:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:25.187 00:43:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:38:25.187 00:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:27.720 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:38:27.721 00:43:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:27.721 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:27.721 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:27.721 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:27.721 00:43:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:27.721 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:27.721 00:43:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:27.721 00:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:27.721 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:27.721 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:27.721 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:27.721 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:27.722 00:43:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:27.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:27.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:38:27.722 00:38:27.722 --- 10.0.0.2 ping statistics --- 00:38:27.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:27.722 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:27.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:27.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:38:27.722 00:38:27.722 --- 10.0.0.1 ping statistics --- 00:38:27.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:27.722 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:38:27.722 only one NIC for nvmf test 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:38:27.722 00:43:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:27.722 rmmod nvme_tcp 00:38:27.722 rmmod nvme_fabrics 00:38:27.722 rmmod nvme_keyring 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:27.722 00:43:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:27.722 00:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:29.632 
00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:29.632 00:38:29.632 real 0m4.526s 00:38:29.632 user 0m0.907s 00:38:29.632 sys 0m1.638s 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:29.632 ************************************ 00:38:29.632 END TEST nvmf_target_multipath 00:38:29.632 ************************************ 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:29.632 ************************************ 00:38:29.632 START TEST nvmf_zcopy 00:38:29.632 ************************************ 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:29.632 * Looking for test storage... 
00:38:29.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:38:29.632 00:43:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:29.632 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:29.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.633 --rc genhtml_branch_coverage=1 00:38:29.633 --rc genhtml_function_coverage=1 00:38:29.633 --rc genhtml_legend=1 00:38:29.633 --rc geninfo_all_blocks=1 00:38:29.633 --rc geninfo_unexecuted_blocks=1 00:38:29.633 00:38:29.633 ' 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:29.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.633 --rc genhtml_branch_coverage=1 00:38:29.633 --rc genhtml_function_coverage=1 00:38:29.633 --rc genhtml_legend=1 00:38:29.633 --rc geninfo_all_blocks=1 00:38:29.633 --rc geninfo_unexecuted_blocks=1 00:38:29.633 00:38:29.633 ' 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:29.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.633 --rc genhtml_branch_coverage=1 00:38:29.633 --rc genhtml_function_coverage=1 00:38:29.633 --rc genhtml_legend=1 00:38:29.633 --rc geninfo_all_blocks=1 00:38:29.633 --rc geninfo_unexecuted_blocks=1 00:38:29.633 00:38:29.633 ' 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:29.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.633 --rc genhtml_branch_coverage=1 00:38:29.633 --rc genhtml_function_coverage=1 00:38:29.633 --rc genhtml_legend=1 00:38:29.633 --rc geninfo_all_blocks=1 00:38:29.633 --rc geninfo_unexecuted_blocks=1 00:38:29.633 00:38:29.633 ' 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:29.633 00:43:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:29.633 00:43:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:38:29.633 00:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:32.224 
00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:32.224 00:43:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:32.224 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:32.225 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:32.225 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:32.225 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:32.225 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:32.225 00:43:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:32.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:32.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:38:32.225 00:38:32.225 --- 10.0.0.2 ping statistics --- 00:38:32.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:32.225 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:32.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:32.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:38:32.225 00:38:32.225 --- 10.0.0.1 ping statistics --- 00:38:32.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:32.225 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=440656 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 440656 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 440656 ']' 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:32.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:32.225 [2024-11-18 00:43:55.763461] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:32.225 [2024-11-18 00:43:55.764574] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:38:32.225 [2024-11-18 00:43:55.764640] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:32.225 [2024-11-18 00:43:55.836676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.225 [2024-11-18 00:43:55.884568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:32.225 [2024-11-18 00:43:55.884638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:32.225 [2024-11-18 00:43:55.884666] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:32.225 [2024-11-18 00:43:55.884677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:32.225 [2024-11-18 00:43:55.884687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:32.225 [2024-11-18 00:43:55.885297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:32.225 [2024-11-18 00:43:55.979399] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:32.225 [2024-11-18 00:43:55.979740] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
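Once nvmf_tgt is up inside the namespace, the trace proceeds through a series of rpc_cmd calls (nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_listener, bdev_malloc_create, nvmf_subsystem_add_ns). As a rough standalone sketch, those calls correspond to the rpc.py invocations below; the scripts/rpc.py path and the default /var/tmp/spdk.sock socket are assumptions from a stock SPDK checkout, and the script only prints the commands rather than running them, since executing them requires a live nvmf_tgt process:

```shell
#!/usr/bin/env bash
# Sketch of the zcopy test's subsystem bring-up, mirroring the rpc_cmd
# calls in the trace. Socket path and rpc.py location are assumptions.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

cmds=(
  "$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy"
  "$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10"
  "$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
  "$RPC bdev_malloc_create 32 4096 -b malloc0"
  "$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1"
)

# Print rather than execute: these need a running target to succeed.
for c in "${cmds[@]}"; do
  echo "$c"
done
```

The arguments (zero-copy transport with `-c 0 --zcopy`, a 32 MiB / 4096-byte-block malloc bdev, listener on 10.0.0.2:4420) are taken directly from the traced script lines.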
00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:32.225 00:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:32.510 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:32.510 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:38:32.510 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:38:32.510 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.510 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:32.510 [2024-11-18 00:43:56.029890] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:32.510 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.510 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:32.510 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.510 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:32.510 
00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.510 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:32.510 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:32.511 [2024-11-18 00:43:56.046032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:32.511 malloc0 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:32.511 { 00:38:32.511 "params": { 00:38:32.511 "name": "Nvme$subsystem", 00:38:32.511 "trtype": "$TEST_TRANSPORT", 00:38:32.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:32.511 "adrfam": "ipv4", 00:38:32.511 "trsvcid": "$NVMF_PORT", 00:38:32.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:32.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:32.511 "hdgst": ${hdgst:-false}, 00:38:32.511 "ddgst": ${ddgst:-false} 00:38:32.511 }, 00:38:32.511 "method": "bdev_nvme_attach_controller" 00:38:32.511 } 00:38:32.511 EOF 00:38:32.511 )") 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:32.511 00:43:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:32.511 00:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:32.511 "params": { 00:38:32.511 "name": "Nvme1", 00:38:32.511 "trtype": "tcp", 00:38:32.511 "traddr": "10.0.0.2", 00:38:32.511 "adrfam": "ipv4", 00:38:32.511 "trsvcid": "4420", 00:38:32.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:32.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:32.511 "hdgst": false, 00:38:32.511 "ddgst": false 00:38:32.511 }, 00:38:32.511 "method": "bdev_nvme_attach_controller" 00:38:32.511 }' 00:38:32.511 [2024-11-18 00:43:56.130029] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:38:32.511 [2024-11-18 00:43:56.130096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440685 ] 00:38:32.511 [2024-11-18 00:43:56.199262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.511 [2024-11-18 00:43:56.247093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:33.077 Running I/O for 10 seconds... 
00:38:34.942 5605.00 IOPS, 43.79 MiB/s [2024-11-17T23:43:59.712Z] 5670.50 IOPS, 44.30 MiB/s [2024-11-17T23:44:00.647Z] 5670.00 IOPS, 44.30 MiB/s [2024-11-17T23:44:02.019Z] 5678.00 IOPS, 44.36 MiB/s [2024-11-17T23:44:02.966Z] 5685.60 IOPS, 44.42 MiB/s [2024-11-17T23:44:03.904Z] 5694.17 IOPS, 44.49 MiB/s [2024-11-17T23:44:04.839Z] 5690.57 IOPS, 44.46 MiB/s [2024-11-17T23:44:05.774Z] 5690.75 IOPS, 44.46 MiB/s [2024-11-17T23:44:06.709Z] 5692.33 IOPS, 44.47 MiB/s [2024-11-17T23:44:06.709Z] 5691.90 IOPS, 44.47 MiB/s 00:38:42.887 Latency(us) 00:38:42.887 [2024-11-17T23:44:06.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:42.887 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:38:42.887 Verification LBA range: start 0x0 length 0x1000 00:38:42.887 Nvme1n1 : 10.01 5692.21 44.47 0.00 0.00 22426.54 1389.61 29709.65 00:38:42.887 [2024-11-17T23:44:06.709Z] =================================================================================================================== 00:38:42.887 [2024-11-17T23:44:06.709Z] Total : 5692.21 44.47 0.00 0.00 22426.54 1389.61 29709.65 00:38:43.145 00:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=441945 00:38:43.145 00:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:38:43.145 00:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:43.145 00:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:38:43.145 00:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:38:43.145 00:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:43.145 00:44:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:43.145 00:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:43.145 00:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:43.145 { 00:38:43.145 "params": { 00:38:43.145 "name": "Nvme$subsystem", 00:38:43.145 "trtype": "$TEST_TRANSPORT", 00:38:43.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:43.145 "adrfam": "ipv4", 00:38:43.145 "trsvcid": "$NVMF_PORT", 00:38:43.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:43.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:43.145 "hdgst": ${hdgst:-false}, 00:38:43.145 "ddgst": ${ddgst:-false} 00:38:43.145 }, 00:38:43.145 "method": "bdev_nvme_attach_controller" 00:38:43.145 } 00:38:43.145 EOF 00:38:43.145 )") 00:38:43.145 00:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:43.145 [2024-11-18 00:44:06.829824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.145 [2024-11-18 00:44:06.829866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.145 00:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:38:43.145 00:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:43.145 00:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:43.145 "params": { 00:38:43.145 "name": "Nvme1", 00:38:43.145 "trtype": "tcp", 00:38:43.145 "traddr": "10.0.0.2", 00:38:43.145 "adrfam": "ipv4", 00:38:43.145 "trsvcid": "4420", 00:38:43.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:43.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:43.145 "hdgst": false, 00:38:43.145 "ddgst": false 00:38:43.145 }, 00:38:43.145 "method": "bdev_nvme_attach_controller" 00:38:43.145 }' 00:38:43.145 [2024-11-18 00:44:06.837765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.145 [2024-11-18 00:44:06.837787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.145 [2024-11-18 00:44:06.845756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.145 [2024-11-18 00:44:06.845777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.145 [2024-11-18 00:44:06.853751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.145 [2024-11-18 00:44:06.853771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.146 [2024-11-18 00:44:06.861752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.146 [2024-11-18 00:44:06.861773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.146 [2024-11-18 00:44:06.869589] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:38:43.146 [2024-11-18 00:44:06.869683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441945 ] 00:38:43.146 [2024-11-18 00:44:06.869763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.146 [2024-11-18 00:44:06.869782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.146 [2024-11-18 00:44:06.877748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.146 [2024-11-18 00:44:06.877767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.146 [2024-11-18 00:44:06.885747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.146 [2024-11-18 00:44:06.885766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.146 [2024-11-18 00:44:06.893747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.146 [2024-11-18 00:44:06.893766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.146 [2024-11-18 00:44:06.901750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.146 [2024-11-18 00:44:06.901769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.146 [2024-11-18 00:44:06.909757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.146 [2024-11-18 00:44:06.909777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.146 [2024-11-18 00:44:06.917750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.146 [2024-11-18 00:44:06.917769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:38:43.146 [2024-11-18 00:44:06.925747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.146 [2024-11-18 00:44:06.925766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.146 [2024-11-18 00:44:06.933754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.146 [2024-11-18 00:44:06.933774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.146 [2024-11-18 00:44:06.939612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:43.146 [2024-11-18 00:44:06.941747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.146 [2024-11-18 00:44:06.941766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.146 [2024-11-18 00:44:06.949803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.146 [2024-11-18 00:44:06.949841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.146 [2024-11-18 00:44:06.957788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.146 [2024-11-18 00:44:06.957820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.146 [2024-11-18 00:44:06.965787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.146 [2024-11-18 00:44:06.965818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.405 [2024-11-18 00:44:06.973750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.405 [2024-11-18 00:44:06.973770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.405 [2024-11-18 00:44:06.981748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.405 [2024-11-18 00:44:06.981767] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.405 [2024-11-18 00:44:06.989748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.405 [2024-11-18 00:44:06.989767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.405 [2024-11-18 00:44:06.990342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:43.405 [2024-11-18 00:44:06.997747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:06.997766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.005770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.005798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.013800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.013836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.021793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.021828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.029797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.029833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.037796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.037832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.045796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:38:43.406 [2024-11-18 00:44:07.045833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.053797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.053833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.061750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.061770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.069793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.069828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.077794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.077829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.085776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.085806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.093748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.093767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.101758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.101782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.109755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 
00:44:07.109784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.117762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.117785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.125752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.125774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.133752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.133774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.141748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.141768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.149747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.149766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.157747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.157766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.165748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.165767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.173752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.173774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.181752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.181774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.189752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.189774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.198269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.198319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 [2024-11-18 00:44:07.205753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.205775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.406 Running I/O for 5 seconds... 
00:38:43.406 [2024-11-18 00:44:07.221099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.406 [2024-11-18 00:44:07.221141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.232740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.232768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.249690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.249732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.261500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.261528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.274807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.274834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.293549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.293578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.303973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.304010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.319631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.319673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.331546] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.331573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.346253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.346293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.357189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.357230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.370926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.370951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.385046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.385074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.395814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.395840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.408856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.408881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.420268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.420317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.436639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.436680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.447468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.447495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.460715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.460741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.472728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.472768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.665 [2024-11-18 00:44:07.484626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.665 [2024-11-18 00:44:07.484653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.496762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.496789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.508543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.508571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.521738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.521764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.533325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 
[2024-11-18 00:44:07.533353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.545238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.545279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.557815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.557854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.569832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.569857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.582137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.582177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.594430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.594458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.606959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.606984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.622731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.622759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.633385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.633414] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.646685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.646726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.662533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.662561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.673128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.673155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.686426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.686454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.698690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.698715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.711273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.711325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.725750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.725777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.923 [2024-11-18 00:44:07.736670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.923 [2024-11-18 00:44:07.736711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:44.182 [2024-11-18 00:44:07.752411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.752438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.763366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.763393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.777138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.777164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.789907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.789932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.803006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.803031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.820120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.820148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.831161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.831188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.844341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.844369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.855758] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.855783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.872383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.872411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.883223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.883260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.896760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.896802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.909375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.909403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.921719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.921745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.933237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.933264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.945801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.945827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.958222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.958249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.975421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.975450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.986207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.986234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.182 [2024-11-18 00:44:07.999587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.182 [2024-11-18 00:44:07.999632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 [2024-11-18 00:44:08.016952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 [2024-11-18 00:44:08.016980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 [2024-11-18 00:44:08.027891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 [2024-11-18 00:44:08.027918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 [2024-11-18 00:44:08.041270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 [2024-11-18 00:44:08.041320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 [2024-11-18 00:44:08.053916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 [2024-11-18 00:44:08.053941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 [2024-11-18 00:44:08.066643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 
[2024-11-18 00:44:08.066684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 [2024-11-18 00:44:08.084726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 [2024-11-18 00:44:08.084752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 [2024-11-18 00:44:08.095967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 [2024-11-18 00:44:08.095992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 [2024-11-18 00:44:08.109485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 [2024-11-18 00:44:08.109511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 [2024-11-18 00:44:08.121134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 [2024-11-18 00:44:08.121174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 [2024-11-18 00:44:08.133382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 [2024-11-18 00:44:08.133410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 [2024-11-18 00:44:08.146394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 [2024-11-18 00:44:08.146423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 [2024-11-18 00:44:08.158812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 [2024-11-18 00:44:08.158839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 [2024-11-18 00:44:08.175434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 [2024-11-18 00:44:08.175467] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 [2024-11-18 00:44:08.186603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 [2024-11-18 00:44:08.186643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 [2024-11-18 00:44:08.199869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 [2024-11-18 00:44:08.199895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 [2024-11-18 00:44:08.214091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 [2024-11-18 00:44:08.214118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 10230.00 IOPS, 79.92 MiB/s [2024-11-17T23:44:08.264Z] [2024-11-18 00:44:08.225330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 [2024-11-18 00:44:08.225357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 [2024-11-18 00:44:08.238769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 [2024-11-18 00:44:08.238794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.442 [2024-11-18 00:44:08.255920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.442 [2024-11-18 00:44:08.255946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.266911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.266937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.283334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.283363] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.301008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.301033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.312353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.312381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.328896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.328922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.340970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.340996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.353120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.353146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.365517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.365545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.377614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.377655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.389447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.389474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:44.701 [2024-11-18 00:44:08.401257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.401283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.414600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.414626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.433135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.433161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.444484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.444512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.460802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.460827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.471618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.471644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.484781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.484806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.497015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.497040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.509413] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.509455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.701 [2024-11-18 00:44:08.521778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.701 [2024-11-18 00:44:08.521818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.960 [2024-11-18 00:44:08.534776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.960 [2024-11-18 00:44:08.534815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.960 [2024-11-18 00:44:08.552714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.960 [2024-11-18 00:44:08.552739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.960 [2024-11-18 00:44:08.563497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.960 [2024-11-18 00:44:08.563524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.960 [2024-11-18 00:44:08.576979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.960 [2024-11-18 00:44:08.577003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.960 [2024-11-18 00:44:08.588877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.960 [2024-11-18 00:44:08.588901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.960 [2024-11-18 00:44:08.600164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.960 [2024-11-18 00:44:08.600189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.960 [2024-11-18 00:44:08.612872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:44.960 [2024-11-18 00:44:08.612898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.960 [2024-11-18 00:44:08.625355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.960 [2024-11-18 00:44:08.625381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.960 [2024-11-18 00:44:08.637988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.960 [2024-11-18 00:44:08.638013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.960 [2024-11-18 00:44:08.651098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.960 [2024-11-18 00:44:08.651122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.961 [2024-11-18 00:44:08.669447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.961 [2024-11-18 00:44:08.669490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.961 [2024-11-18 00:44:08.680563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.961 [2024-11-18 00:44:08.680604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.961 [2024-11-18 00:44:08.697051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.961 [2024-11-18 00:44:08.697077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.961 [2024-11-18 00:44:08.708994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.961 [2024-11-18 00:44:08.709019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.961 [2024-11-18 00:44:08.721125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.961 
[2024-11-18 00:44:08.721150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.961 [2024-11-18 00:44:08.734234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.961 [2024-11-18 00:44:08.734259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.961 [2024-11-18 00:44:08.746042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.961 [2024-11-18 00:44:08.746066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.961 [2024-11-18 00:44:08.758673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.961 [2024-11-18 00:44:08.758698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.961 [2024-11-18 00:44:08.776705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.961 [2024-11-18 00:44:08.776746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.219 [2024-11-18 00:44:08.787512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:08.787552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.219 [2024-11-18 00:44:08.800486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:08.800514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.219 [2024-11-18 00:44:08.811636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:08.811676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.219 [2024-11-18 00:44:08.828327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:08.828355] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.219 [2024-11-18 00:44:08.843880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:08.843906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.219 [2024-11-18 00:44:08.860786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:08.860827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.219 [2024-11-18 00:44:08.871721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:08.871746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.219 [2024-11-18 00:44:08.888813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:08.888837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.219 [2024-11-18 00:44:08.899399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:08.899440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.219 [2024-11-18 00:44:08.912397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:08.912425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.219 [2024-11-18 00:44:08.924649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:08.924688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.219 [2024-11-18 00:44:08.936904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:08.936928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:45.219 [2024-11-18 00:44:08.949864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:08.949888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.219 [2024-11-18 00:44:08.962748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:08.962773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.219 [2024-11-18 00:44:08.981242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:08.981267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.219 [2024-11-18 00:44:08.992691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:08.992715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.219 [2024-11-18 00:44:09.007642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:09.007666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.219 [2024-11-18 00:44:09.021446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:09.021474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.219 [2024-11-18 00:44:09.032801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.219 [2024-11-18 00:44:09.032827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.477 [2024-11-18 00:44:09.049465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.477 [2024-11-18 00:44:09.049504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.477 [2024-11-18 00:44:09.060793] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.477 [2024-11-18 00:44:09.060829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.477 [2024-11-18 00:44:09.075691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.477 [2024-11-18 00:44:09.075716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.477 [2024-11-18 00:44:09.086794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.477 [2024-11-18 00:44:09.086821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.478 [2024-11-18 00:44:09.103499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.478 [2024-11-18 00:44:09.103526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.478 [2024-11-18 00:44:09.119490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.478 [2024-11-18 00:44:09.119518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.478 [2024-11-18 00:44:09.130945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.478 [2024-11-18 00:44:09.130970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.478 [2024-11-18 00:44:09.147256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.478 [2024-11-18 00:44:09.147295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.478 [2024-11-18 00:44:09.158984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.478 [2024-11-18 00:44:09.159010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.478 [2024-11-18 00:44:09.174904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:45.478 [2024-11-18 00:44:09.174944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.478 [2024-11-18 00:44:09.185509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.478 [2024-11-18 00:44:09.185541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.478 [2024-11-18 00:44:09.199587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.478 [2024-11-18 00:44:09.199626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.478 [2024-11-18 00:44:09.215696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.478 [2024-11-18 00:44:09.215737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.478 10262.00 IOPS, 80.17 MiB/s [2024-11-17T23:44:09.300Z] [2024-11-18 00:44:09.227206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.478 [2024-11-18 00:44:09.227231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.478 [2024-11-18 00:44:09.240911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.478 [2024-11-18 00:44:09.240936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.478 [2024-11-18 00:44:09.253501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.478 [2024-11-18 00:44:09.253528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.478 [2024-11-18 00:44:09.266161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.478 [2024-11-18 00:44:09.266185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.478 [2024-11-18 00:44:09.278551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:45.478 [2024-11-18 00:44:09.278578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.478 [2024-11-18 00:44:09.290989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.478 [2024-11-18 00:44:09.291013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.736 [2024-11-18 00:44:09.309027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.736 [2024-11-18 00:44:09.309052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.736 [2024-11-18 00:44:09.320373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.736 [2024-11-18 00:44:09.320401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.736 [2024-11-18 00:44:09.336469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.736 [2024-11-18 00:44:09.336496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.736 [2024-11-18 00:44:09.347908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.736 [2024-11-18 00:44:09.347933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.736 [2024-11-18 00:44:09.364098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.736 [2024-11-18 00:44:09.364123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.736 [2024-11-18 00:44:09.375952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.736 [2024-11-18 00:44:09.375977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.736 [2024-11-18 00:44:09.390760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.737 
[2024-11-18 00:44:09.390785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.737 [2024-11-18 00:44:09.402453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.737 [2024-11-18 00:44:09.402480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.737 [2024-11-18 00:44:09.415854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.737 [2024-11-18 00:44:09.415878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.737 [2024-11-18 00:44:09.432290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.737 [2024-11-18 00:44:09.432323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.737 [2024-11-18 00:44:09.443493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.737 [2024-11-18 00:44:09.443521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.737 [2024-11-18 00:44:09.456938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.737 [2024-11-18 00:44:09.456963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.737 [2024-11-18 00:44:09.468966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.737 [2024-11-18 00:44:09.468990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.737 [2024-11-18 00:44:09.481173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.737 [2024-11-18 00:44:09.481211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.737 [2024-11-18 00:44:09.493549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.737 [2024-11-18 00:44:09.493576] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.737 [2024-11-18 00:44:09.506155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.737 [2024-11-18 00:44:09.506179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.737 [2024-11-18 00:44:09.518303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.737 [2024-11-18 00:44:09.518351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.737 [2024-11-18 00:44:09.530915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.737 [2024-11-18 00:44:09.530939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.737 [2024-11-18 00:44:09.545538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.737 [2024-11-18 00:44:09.545566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.737 [2024-11-18 00:44:09.556527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.737 [2024-11-18 00:44:09.556554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.994 [2024-11-18 00:44:09.572712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.572736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.994 [2024-11-18 00:44:09.583574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.583618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.994 [2024-11-18 00:44:09.596911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.596935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:45.994 [2024-11-18 00:44:09.609181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.609207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.994 [2024-11-18 00:44:09.622238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.622263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.994 [2024-11-18 00:44:09.634514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.634540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.994 [2024-11-18 00:44:09.647227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.647252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.994 [2024-11-18 00:44:09.662882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.662909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.994 [2024-11-18 00:44:09.673570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.673615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.994 [2024-11-18 00:44:09.686975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.687001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.994 [2024-11-18 00:44:09.702043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.702069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.994 [2024-11-18 00:44:09.712619] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.712647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.994 [2024-11-18 00:44:09.728673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.728698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.994 [2024-11-18 00:44:09.740306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.740341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.994 [2024-11-18 00:44:09.753706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.753731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.994 [2024-11-18 00:44:09.765814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.765840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.994 [2024-11-18 00:44:09.778226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.778252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.994 [2024-11-18 00:44:09.790952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.790979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.994 [2024-11-18 00:44:09.806019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.994 [2024-11-18 00:44:09.806060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.252 [2024-11-18 00:44:09.817044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:46.252 [2024-11-18 00:44:09.817086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.252 [2024-11-18 00:44:09.830494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.252 [2024-11-18 00:44:09.830522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.252 [2024-11-18 00:44:09.843113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.252 [2024-11-18 00:44:09.843152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.252 [2024-11-18 00:44:09.859285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.252 [2024-11-18 00:44:09.859337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.252 [2024-11-18 00:44:09.870484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.252 [2024-11-18 00:44:09.870512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.252 [2024-11-18 00:44:09.884117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.252 [2024-11-18 00:44:09.884142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.252 [2024-11-18 00:44:09.896372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.252 [2024-11-18 00:44:09.896398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.252 [2024-11-18 00:44:09.908168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.252 [2024-11-18 00:44:09.908193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.252 [2024-11-18 00:44:09.924878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.252 
[2024-11-18 00:44:09.924903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.252 [2024-11-18 00:44:09.935848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.252 [2024-11-18 00:44:09.935873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.252 [2024-11-18 00:44:09.952360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.253 [2024-11-18 00:44:09.952386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.253 [2024-11-18 00:44:09.963483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.253 [2024-11-18 00:44:09.963510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.253 [2024-11-18 00:44:09.977133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.253 [2024-11-18 00:44:09.977173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.253 [2024-11-18 00:44:09.989637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.253 [2024-11-18 00:44:09.989680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.253 [2024-11-18 00:44:10.001601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.253 [2024-11-18 00:44:10.001628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.253 [2024-11-18 00:44:10.013811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.253 [2024-11-18 00:44:10.013839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.253 [2024-11-18 00:44:10.026467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.253 [2024-11-18 00:44:10.026498] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:46.253 [2024-11-18 00:44:10.041896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:46.253 [2024-11-18 00:44:10.041943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeated for timestamps 00:44:10.053153 through 00:44:10.210213 ...]
00:38:46.511 10249.33 IOPS, 80.07 MiB/s [2024-11-17T23:44:10.333Z]
[... same error pair repeated for timestamps 00:44:10.222251 through 00:44:11.222775 ...]
00:38:47.546 10251.00 IOPS, 80.09 MiB/s [2024-11-17T23:44:11.369Z]
[... same error pair repeated for timestamps 00:44:11.241651 through 00:44:12.053908 ...]
00:38:48.324 [2024-11-18 00:44:12.066286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:48.324 [2024-11-18 00:44:12.066335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:48.324 [2024-11-18 00:44:12.078494] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.324 [2024-11-18 00:44:12.078522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.324 [2024-11-18 00:44:12.090952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.324 [2024-11-18 00:44:12.090977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.324 [2024-11-18 00:44:12.108510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.324 [2024-11-18 00:44:12.108537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.324 [2024-11-18 00:44:12.119209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.324 [2024-11-18 00:44:12.119234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.324 [2024-11-18 00:44:12.132424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.324 [2024-11-18 00:44:12.132451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.324 [2024-11-18 00:44:12.143575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.324 [2024-11-18 00:44:12.143618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.582 [2024-11-18 00:44:12.158633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.582 [2024-11-18 00:44:12.158674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.582 [2024-11-18 00:44:12.169645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.582 [2024-11-18 00:44:12.169688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.582 [2024-11-18 00:44:12.183064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:48.582 [2024-11-18 00:44:12.183089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.582 [2024-11-18 00:44:12.197755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.582 [2024-11-18 00:44:12.197783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.582 [2024-11-18 00:44:12.208523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.582 [2024-11-18 00:44:12.208551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.224620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.224647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 10249.00 IOPS, 80.07 MiB/s [2024-11-17T23:44:12.405Z] [2024-11-18 00:44:12.235004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.235032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 00:38:48.583 Latency(us) 00:38:48.583 [2024-11-17T23:44:12.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:48.583 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:38:48.583 Nvme1n1 : 5.01 10251.50 80.09 0.00 0.00 12469.35 2791.35 20194.80 00:38:48.583 [2024-11-17T23:44:12.405Z] =================================================================================================================== 00:38:48.583 [2024-11-17T23:44:12.405Z] Total : 10251.50 80.09 0.00 0.00 12469.35 2791.35 20194.80 00:38:48.583 [2024-11-18 00:44:12.241757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.241781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:38:48.583 [2024-11-18 00:44:12.249762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.249786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.257821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.257868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.265830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.265879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.273830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.273878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.281835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.281881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.289820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.289865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.297835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.297884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.305825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.305872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.313825] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.313867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.321834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.321886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.329829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.329878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.337829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.337876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.345836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.345881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.353829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.353876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.361829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.361878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.369810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.369869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.377766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.377789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.385813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.385856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.393821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.393866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.583 [2024-11-18 00:44:12.401847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.583 [2024-11-18 00:44:12.401898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.842 [2024-11-18 00:44:12.409757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.842 [2024-11-18 00:44:12.409778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.842 [2024-11-18 00:44:12.417754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.842 [2024-11-18 00:44:12.417775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.842 [2024-11-18 00:44:12.425750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.842 [2024-11-18 00:44:12.425770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (441945) - No such process 00:38:48.842 00:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 441945 00:38:48.842 00:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:38:48.842 00:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.842 00:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:48.842 00:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.842 00:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:48.842 00:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.842 00:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:48.842 delay0 00:38:48.842 00:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.842 00:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:38:48.842 00:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.842 00:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:48.842 00:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.842 00:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:38:48.842 [2024-11-18 00:44:12.594482] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:56.952 
Initializing NVMe Controllers 00:38:56.952 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:56.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:56.952 Initialization complete. Launching workers. 00:38:56.952 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 223, failed: 25841 00:38:56.952 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 25908, failed to submit 156 00:38:56.952 success 25841, unsuccessful 67, failed 0 00:38:56.952 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:56.953 rmmod nvme_tcp 00:38:56.953 rmmod nvme_fabrics 00:38:56.953 rmmod nvme_keyring 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:38:56.953 00:44:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 440656 ']' 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 440656 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 440656 ']' 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 440656 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 440656 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 440656' 00:38:56.953 killing process with pid 440656 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 440656 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 440656 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:56.953 00:44:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:56.953 00:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:58.331 00:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:58.331 00:38:58.331 real 0m28.671s 00:38:58.331 user 0m40.033s 00:38:58.331 sys 0m10.274s 00:38:58.331 00:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:58.331 00:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:58.331 ************************************ 00:38:58.331 END TEST nvmf_zcopy 00:38:58.332 ************************************ 00:38:58.332 00:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:38:58.332 00:44:21 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:58.332 00:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:58.332 00:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:58.332 ************************************ 00:38:58.332 START TEST nvmf_nmic 00:38:58.332 ************************************ 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:38:58.332 * Looking for test storage... 00:38:58.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@337 -- # IFS=.-: 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:58.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.332 --rc genhtml_branch_coverage=1 00:38:58.332 --rc 
genhtml_function_coverage=1 00:38:58.332 --rc genhtml_legend=1 00:38:58.332 --rc geninfo_all_blocks=1 00:38:58.332 --rc geninfo_unexecuted_blocks=1 00:38:58.332 00:38:58.332 ' 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:58.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.332 --rc genhtml_branch_coverage=1 00:38:58.332 --rc genhtml_function_coverage=1 00:38:58.332 --rc genhtml_legend=1 00:38:58.332 --rc geninfo_all_blocks=1 00:38:58.332 --rc geninfo_unexecuted_blocks=1 00:38:58.332 00:38:58.332 ' 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:58.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.332 --rc genhtml_branch_coverage=1 00:38:58.332 --rc genhtml_function_coverage=1 00:38:58.332 --rc genhtml_legend=1 00:38:58.332 --rc geninfo_all_blocks=1 00:38:58.332 --rc geninfo_unexecuted_blocks=1 00:38:58.332 00:38:58.332 ' 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:58.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.332 --rc genhtml_branch_coverage=1 00:38:58.332 --rc genhtml_function_coverage=1 00:38:58.332 --rc genhtml_legend=1 00:38:58.332 --rc geninfo_all_blocks=1 00:38:58.332 --rc geninfo_unexecuted_blocks=1 00:38:58.332 00:38:58.332 ' 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:58.332 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.593 00:44:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:58.593 00:44:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:58.593 00:44:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:38:58.593 00:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@320 -- # local -ga e810 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:00.513 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:00.513 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:00.514 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:00.514 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:00.514 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:00.514 00:44:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:00.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:00.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:39:00.514 00:39:00.514 --- 10.0.0.2 ping statistics --- 00:39:00.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:00.514 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:00.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:00.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:39:00.514 00:39:00.514 --- 10.0.0.1 ping statistics --- 00:39:00.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:00.514 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:00.514 00:44:24 
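The `nvmf_tcp_init` sequence above can be summarized as the following sketch. Interface names (`cvl_0_0`, `cvl_0_1`), the namespace name, and the 10.0.0.0/24 addressing are taken from the trace; the real commands need root and the two back-to-back NICs, so the sketch defaults to a dry run that only prints what would be executed.

```shell
# Hedged sketch of the namespace setup nvmftestinit performs above:
# the target-side NIC is moved into a private netns so target and
# initiator talk over a real NIC pair on one host. DRY_RUN=1 (default)
# prints the commands instead of running them (they require root).
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

TARGET_IF=cvl_0_0
INIT_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INIT_IF"
run ip netns add "$NS"                         # private namespace for the target NIC
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INIT_IF"     # initiator IP (host side)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target IP
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port on the initiator-side interface
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
# sanity-check reachability in both directions, as the log does
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

After this split, the target app is launched under `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix visible later in the trace), which is why the `nvmf_tgt` listener at 10.0.0.2:4420 is reachable from the host-side initiator.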
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=445360 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 445360 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 445360 ']' 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:00.514 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:00.773 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:00.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:00.773 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:00.773 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:00.773 [2024-11-18 00:44:24.383505] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:39:00.773 [2024-11-18 00:44:24.384728] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:39:00.773 [2024-11-18 00:44:24.384794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:00.773 [2024-11-18 00:44:24.457451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:00.773 [2024-11-18 00:44:24.504973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:00.773 [2024-11-18 00:44:24.505024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:00.773 [2024-11-18 00:44:24.505052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:00.773 [2024-11-18 00:44:24.505063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:00.773 [2024-11-18 00:44:24.505072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:00.773 [2024-11-18 00:44:24.506625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:00.773 [2024-11-18 00:44:24.506716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:00.773 [2024-11-18 00:44:24.506782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:00.773 [2024-11-18 00:44:24.506785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:00.773 [2024-11-18 00:44:24.588801] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:00.773 [2024-11-18 00:44:24.589034] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:39:00.773 [2024-11-18 00:44:24.589293] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:00.773 [2024-11-18 00:44:24.589843] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:00.773 [2024-11-18 00:44:24.590074] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:01.032 [2024-11-18 00:44:24.647489] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:01.032 Malloc0 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:01.032 [2024-11-18 
00:44:24.707646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:01.032 test case1: single bdev can't be used in multiple subsystems 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.032 00:44:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:01.032 [2024-11-18 00:44:24.731398] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:39:01.032 [2024-11-18 00:44:24.731429] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:01.032 [2024-11-18 00:44:24.731445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:01.032 request: 00:39:01.032 { 00:39:01.032 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:01.032 "namespace": { 00:39:01.032 "bdev_name": "Malloc0", 00:39:01.032 "no_auto_visible": false 00:39:01.032 }, 00:39:01.032 "method": "nvmf_subsystem_add_ns", 00:39:01.032 "req_id": 1 00:39:01.032 } 00:39:01.032 Got JSON-RPC error response 00:39:01.032 response: 00:39:01.032 { 00:39:01.032 "code": -32602, 00:39:01.032 "message": "Invalid parameters" 00:39:01.032 } 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:01.032 Adding namespace failed - expected result. 
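Test case1 above deliberately provokes a failure (a bdev already claimed `exclusive_write` by one subsystem cannot be added as a namespace to a second) and must not let that failure abort the script. The idiom nmic.sh uses for this, sketched minimally here with `false` standing in for the failing `nvmf_subsystem_add_ns` RPC:

```shell
# Sketch of the expected-failure idiom from target/nmic.sh@28-36 above:
# capture the command's exit status instead of letting it kill the test,
# then assert that it did fail. `false` is a stand-in for the real
# "rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0",
# which fails because Malloc0 is already claimed by cnode1.
nmic_status=0
false || nmic_status=$?
if [ "$nmic_status" -eq 0 ]; then
  echo ' Adding namespace passed - failure expected.'
  exit 1
else
  echo ' Adding namespace failed - expected result.'
fi
```

The `|| nmic_status=$?` capture is what keeps the script alive under an errexit-style harness: the command's non-zero status is consumed by the `||`, recorded, and then checked explicitly.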
00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:01.032 test case2: host connect to nvmf target in multiple paths 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:01.032 [2024-11-18 00:44:24.739485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.032 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:01.291 00:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:01.291 00:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:01.291 00:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:39:01.291 00:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:01.291 00:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:01.291 00:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:39:03.820 00:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:03.820 00:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:03.820 00:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:03.820 00:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:03.820 00:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:03.820 00:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:39:03.820 00:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:03.820 [global] 00:39:03.820 thread=1 00:39:03.820 invalidate=1 00:39:03.820 rw=write 00:39:03.820 time_based=1 00:39:03.820 runtime=1 00:39:03.820 ioengine=libaio 00:39:03.820 direct=1 00:39:03.820 bs=4096 00:39:03.820 iodepth=1 00:39:03.820 norandommap=0 00:39:03.820 numjobs=1 00:39:03.820 00:39:03.820 verify_dump=1 00:39:03.820 verify_backlog=512 00:39:03.820 verify_state_save=0 00:39:03.820 do_verify=1 00:39:03.820 verify=crc32c-intel 00:39:03.820 [job0] 00:39:03.820 filename=/dev/nvme0n1 00:39:03.820 Could not set queue depth (nvme0n1) 00:39:03.820 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:03.820 fio-3.35 00:39:03.820 Starting 1 thread 00:39:04.755 00:39:04.755 job0: (groupid=0, jobs=1): err= 0: pid=445741: Mon Nov 18 
00:44:28 2024 00:39:04.755 read: IOPS=20, BW=83.0KiB/s (85.0kB/s)(84.0KiB/1012msec) 00:39:04.755 slat (nsec): min=5795, max=34003, avg=29015.14, stdev=8883.62 00:39:04.755 clat (usec): min=40534, max=41022, avg=40940.22, stdev=97.24 00:39:04.755 lat (usec): min=40539, max=41045, avg=40969.23, stdev=101.92 00:39:04.755 clat percentiles (usec): 00:39:04.755 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:39:04.755 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:04.755 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:04.755 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:04.755 | 99.99th=[41157] 00:39:04.755 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:39:04.755 slat (usec): min=5, max=40707, avg=141.15, stdev=2183.46 00:39:04.755 clat (usec): min=122, max=376, avg=151.79, stdev=36.85 00:39:04.755 lat (usec): min=129, max=41083, avg=292.95, stdev=2196.08 00:39:04.755 clat percentiles (usec): 00:39:04.755 | 1.00th=[ 125], 5.00th=[ 128], 10.00th=[ 130], 20.00th=[ 135], 00:39:04.755 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:39:04.755 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 243], 95.00th=[ 247], 00:39:04.755 | 99.00th=[ 251], 99.50th=[ 253], 99.90th=[ 375], 99.95th=[ 375], 00:39:04.755 | 99.99th=[ 375] 00:39:04.755 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:04.755 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:04.755 lat (usec) : 250=94.37%, 500=1.69% 00:39:04.755 lat (msec) : 50=3.94% 00:39:04.755 cpu : usr=0.10%, sys=0.40%, ctx=537, majf=0, minf=1 00:39:04.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:04.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.755 issued rwts: 
total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:04.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:04.755 00:39:04.755 Run status group 0 (all jobs): 00:39:04.755 READ: bw=83.0KiB/s (85.0kB/s), 83.0KiB/s-83.0KiB/s (85.0kB/s-85.0kB/s), io=84.0KiB (86.0kB), run=1012-1012msec 00:39:04.755 WRITE: bw=2024KiB/s (2072kB/s), 2024KiB/s-2024KiB/s (2072kB/s-2072kB/s), io=2048KiB (2097kB), run=1012-1012msec 00:39:04.755 00:39:04.755 Disk stats (read/write): 00:39:04.755 nvme0n1: ios=43/512, merge=0/0, ticks=1724/76, in_queue=1800, util=99.80% 00:39:04.755 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:04.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:04.755 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:04.755 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:39:04.755 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:04.755 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:04.755 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:04.755 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:04.755 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:39:04.755 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:04.755 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:04.755 00:44:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:04.755 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:04.755 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:04.755 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:04.755 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:04.755 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:04.755 rmmod nvme_tcp 00:39:04.755 rmmod nvme_fabrics 00:39:05.014 rmmod nvme_keyring 00:39:05.014 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:05.014 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:05.014 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:05.014 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 445360 ']' 00:39:05.014 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 445360 00:39:05.014 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 445360 ']' 00:39:05.014 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 445360 00:39:05.014 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:39:05.014 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:05.014 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 445360 00:39:05.014 
00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:05.014 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:05.014 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 445360' 00:39:05.014 killing process with pid 445360 00:39:05.015 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 445360 00:39:05.015 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 445360 00:39:05.273 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:05.273 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:05.273 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:05.273 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:05.273 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:39:05.273 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:05.273 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:39:05.273 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:05.274 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:05.274 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:05.274 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:05.274 00:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:07.193 00:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:07.193 00:39:07.193 real 0m8.893s 00:39:07.193 user 0m16.334s 00:39:07.193 sys 0m3.290s 00:39:07.193 00:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:07.193 00:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:07.193 ************************************ 00:39:07.193 END TEST nvmf_nmic 00:39:07.193 ************************************ 00:39:07.193 00:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:07.193 00:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:07.193 00:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:07.193 00:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:07.193 ************************************ 00:39:07.193 START TEST nvmf_fio_target 00:39:07.193 ************************************ 00:39:07.193 00:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:07.193 * Looking for test storage... 
00:39:07.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:07.193 00:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:07.193 00:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:39:07.193 00:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:07.453 
00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:07.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.453 --rc genhtml_branch_coverage=1 00:39:07.453 --rc genhtml_function_coverage=1 00:39:07.453 --rc genhtml_legend=1 00:39:07.453 --rc geninfo_all_blocks=1 00:39:07.453 --rc geninfo_unexecuted_blocks=1 00:39:07.453 00:39:07.453 ' 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:07.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.453 --rc genhtml_branch_coverage=1 00:39:07.453 --rc genhtml_function_coverage=1 00:39:07.453 --rc genhtml_legend=1 00:39:07.453 --rc geninfo_all_blocks=1 00:39:07.453 --rc geninfo_unexecuted_blocks=1 00:39:07.453 00:39:07.453 ' 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:07.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.453 --rc genhtml_branch_coverage=1 00:39:07.453 --rc genhtml_function_coverage=1 00:39:07.453 --rc genhtml_legend=1 00:39:07.453 --rc geninfo_all_blocks=1 00:39:07.453 --rc geninfo_unexecuted_blocks=1 00:39:07.453 00:39:07.453 ' 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:07.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:07.453 --rc genhtml_branch_coverage=1 00:39:07.453 --rc genhtml_function_coverage=1 00:39:07.453 --rc genhtml_legend=1 00:39:07.453 --rc geninfo_all_blocks=1 
00:39:07.453 --rc geninfo_unexecuted_blocks=1 00:39:07.453 00:39:07.453 ' 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:07.453 
00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:07.453 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.454 00:44:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:07.454 
00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:07.454 00:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:07.454 00:44:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:09.357 00:44:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:09.357 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:09.357 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:09.357 
00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:09.357 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:09.357 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:09.358 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:09.358 00:44:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:39:09.358 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:09.626 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:09.626 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:09.626 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:09.626 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:09.626 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:09.626 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:09.626 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:09.626 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:09.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:09.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:39:09.627 00:39:09.627 --- 10.0.0.2 ping statistics --- 00:39:09.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:09.627 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:09.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:09.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:39:09.627 00:39:09.627 --- 10.0.0.1 ping statistics --- 00:39:09.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:09.627 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:09.627 00:44:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=447936 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 447936 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 447936 ']' 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:09.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:09.627 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:09.627 [2024-11-18 00:44:33.381001] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:09.627 [2024-11-18 00:44:33.382050] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:39:09.627 [2024-11-18 00:44:33.382121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:09.896 [2024-11-18 00:44:33.456460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:09.896 [2024-11-18 00:44:33.502607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:09.896 [2024-11-18 00:44:33.502672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:09.896 [2024-11-18 00:44:33.502685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:09.896 [2024-11-18 00:44:33.502696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:09.896 [2024-11-18 00:44:33.502706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:09.896 [2024-11-18 00:44:33.504228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:09.896 [2024-11-18 00:44:33.504304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:09.896 [2024-11-18 00:44:33.504362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:09.896 [2024-11-18 00:44:33.504365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:09.896 [2024-11-18 00:44:33.587875] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:09.896 [2024-11-18 00:44:33.588100] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:09.896 [2024-11-18 00:44:33.588408] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:09.896 [2024-11-18 00:44:33.589002] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:09.896 [2024-11-18 00:44:33.589213] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:09.896 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:09.896 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:39:09.896 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:09.896 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:09.896 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:09.896 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:09.896 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:10.154 [2024-11-18 00:44:33.881117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:10.154 00:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:10.720 00:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:10.720 00:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:39:10.979 00:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:10.979 00:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:11.250 00:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:11.250 00:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:11.515 00:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:11.515 00:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:11.773 00:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:12.031 00:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:12.031 00:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:12.289 00:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:12.289 00:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:12.548 00:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:39:12.548 00:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:12.806 00:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:13.062 00:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:13.062 00:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:13.320 00:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:13.320 00:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:13.886 00:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:13.886 [2024-11-18 00:44:37.693291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:14.144 00:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:14.402 00:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:14.661 00:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:14.661 00:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:14.661 00:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:39:14.661 00:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:14.661 00:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:39:14.661 00:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:39:14.661 00:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:39:17.200 00:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:17.200 00:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:17.200 00:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:17.200 00:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:39:17.200 00:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:17.200 00:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:39:17.200 00:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:17.200 [global] 00:39:17.200 thread=1 00:39:17.200 invalidate=1 00:39:17.200 rw=write 00:39:17.200 time_based=1 00:39:17.200 runtime=1 00:39:17.200 ioengine=libaio 00:39:17.200 direct=1 00:39:17.200 bs=4096 00:39:17.200 iodepth=1 00:39:17.200 norandommap=0 00:39:17.200 numjobs=1 00:39:17.200 00:39:17.200 verify_dump=1 00:39:17.200 verify_backlog=512 00:39:17.200 verify_state_save=0 00:39:17.200 do_verify=1 00:39:17.200 verify=crc32c-intel 00:39:17.200 [job0] 00:39:17.200 filename=/dev/nvme0n1 00:39:17.200 [job1] 00:39:17.200 filename=/dev/nvme0n2 00:39:17.200 [job2] 00:39:17.200 filename=/dev/nvme0n3 00:39:17.200 [job3] 00:39:17.200 filename=/dev/nvme0n4 00:39:17.200 Could not set queue depth (nvme0n1) 00:39:17.200 Could not set queue depth (nvme0n2) 00:39:17.200 Could not set queue depth (nvme0n3) 00:39:17.200 Could not set queue depth (nvme0n4) 00:39:17.200 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:17.200 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:17.200 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:17.200 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:17.200 fio-3.35 00:39:17.200 Starting 4 threads 00:39:18.140 00:39:18.140 job0: (groupid=0, jobs=1): err= 0: pid=448881: Mon Nov 18 00:44:41 2024 00:39:18.140 read: IOPS=66, BW=267KiB/s (273kB/s)(268KiB/1005msec) 00:39:18.140 slat (nsec): min=6644, max=40501, avg=13348.01, stdev=10420.33 00:39:18.140 clat (usec): min=300, max=42044, avg=12645.75, stdev=18893.90 00:39:18.140 lat (usec): min=313, 
max=42079, avg=12659.09, stdev=18902.16 00:39:18.140 clat percentiles (usec): 00:39:18.140 | 1.00th=[ 302], 5.00th=[ 408], 10.00th=[ 408], 20.00th=[ 412], 00:39:18.140 | 30.00th=[ 412], 40.00th=[ 416], 50.00th=[ 420], 60.00th=[ 424], 00:39:18.140 | 70.00th=[ 449], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:39:18.140 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:18.140 | 99.99th=[42206] 00:39:18.140 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:39:18.140 slat (nsec): min=7877, max=43607, avg=12402.38, stdev=4941.41 00:39:18.140 clat (usec): min=138, max=4252, avg=289.88, stdev=192.40 00:39:18.140 lat (usec): min=149, max=4268, avg=302.28, stdev=192.88 00:39:18.140 clat percentiles (usec): 00:39:18.140 | 1.00th=[ 149], 5.00th=[ 172], 10.00th=[ 196], 20.00th=[ 227], 00:39:18.140 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 258], 60.00th=[ 281], 00:39:18.140 | 70.00th=[ 306], 80.00th=[ 355], 90.00th=[ 400], 95.00th=[ 412], 00:39:18.140 | 99.00th=[ 502], 99.50th=[ 685], 99.90th=[ 4228], 99.95th=[ 4228], 00:39:18.140 | 99.99th=[ 4228] 00:39:18.140 bw ( KiB/s): min= 4096, max= 4096, per=23.95%, avg=4096.00, stdev= 0.00, samples=1 00:39:18.140 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:18.140 lat (usec) : 250=37.82%, 500=57.69%, 750=0.86% 00:39:18.140 lat (msec) : 10=0.17%, 50=3.45% 00:39:18.140 cpu : usr=0.10%, sys=1.29%, ctx=580, majf=0, minf=1 00:39:18.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.140 issued rwts: total=67,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:18.140 job1: (groupid=0, jobs=1): err= 0: pid=448882: Mon Nov 18 00:44:41 2024 00:39:18.140 read: IOPS=22, BW=89.1KiB/s 
(91.3kB/s)(92.0KiB/1032msec) 00:39:18.140 slat (nsec): min=7701, max=34200, avg=22613.04, stdev=10078.01 00:39:18.140 clat (usec): min=234, max=41876, avg=39225.95, stdev=8502.39 00:39:18.140 lat (usec): min=248, max=41890, avg=39248.57, stdev=8504.22 00:39:18.140 clat percentiles (usec): 00:39:18.140 | 1.00th=[ 235], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:39:18.140 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:18.140 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:18.140 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:39:18.140 | 99.99th=[41681] 00:39:18.140 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:39:18.140 slat (nsec): min=6385, max=32335, avg=9650.36, stdev=4305.89 00:39:18.140 clat (usec): min=141, max=516, avg=239.89, stdev=49.27 00:39:18.140 lat (usec): min=149, max=533, avg=249.54, stdev=49.42 00:39:18.140 clat percentiles (usec): 00:39:18.140 | 1.00th=[ 147], 5.00th=[ 167], 10.00th=[ 196], 20.00th=[ 215], 00:39:18.140 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:39:18.140 | 70.00th=[ 247], 80.00th=[ 260], 90.00th=[ 285], 95.00th=[ 359], 00:39:18.140 | 99.00th=[ 400], 99.50th=[ 437], 99.90th=[ 519], 99.95th=[ 519], 00:39:18.140 | 99.99th=[ 519] 00:39:18.140 bw ( KiB/s): min= 4096, max= 4096, per=23.95%, avg=4096.00, stdev= 0.00, samples=1 00:39:18.140 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:18.140 lat (usec) : 250=70.65%, 500=25.05%, 750=0.19% 00:39:18.140 lat (msec) : 50=4.11% 00:39:18.140 cpu : usr=0.48%, sys=0.19%, ctx=536, majf=0, minf=1 00:39:18.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.140 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:39:18.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:18.140 job2: (groupid=0, jobs=1): err= 0: pid=448883: Mon Nov 18 00:44:41 2024 00:39:18.140 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:39:18.140 slat (nsec): min=4627, max=53290, avg=15285.65, stdev=7172.34 00:39:18.140 clat (usec): min=221, max=40981, avg=340.08, stdev=1459.81 00:39:18.140 lat (usec): min=231, max=40995, avg=355.37, stdev=1459.73 00:39:18.140 clat percentiles (usec): 00:39:18.140 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 245], 00:39:18.140 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 277], 00:39:18.140 | 70.00th=[ 285], 80.00th=[ 314], 90.00th=[ 392], 95.00th=[ 420], 00:39:18.140 | 99.00th=[ 490], 99.50th=[ 502], 99.90th=[40633], 99.95th=[41157], 00:39:18.140 | 99.99th=[41157] 00:39:18.140 write: IOPS=1850, BW=7401KiB/s (7578kB/s)(7408KiB/1001msec); 0 zone resets 00:39:18.140 slat (nsec): min=6218, max=48265, avg=16130.45, stdev=8023.05 00:39:18.140 clat (usec): min=158, max=662, avg=221.67, stdev=51.17 00:39:18.140 lat (usec): min=169, max=670, avg=237.80, stdev=49.75 00:39:18.140 clat percentiles (usec): 00:39:18.140 | 1.00th=[ 165], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 192], 00:39:18.140 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 215], 00:39:18.140 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 262], 95.00th=[ 375], 00:39:18.140 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 553], 99.95th=[ 660], 00:39:18.140 | 99.99th=[ 660] 00:39:18.140 bw ( KiB/s): min= 8016, max= 8016, per=46.88%, avg=8016.00, stdev= 0.00, samples=1 00:39:18.140 iops : min= 2004, max= 2004, avg=2004.00, stdev= 0.00, samples=1 00:39:18.140 lat (usec) : 250=61.42%, 500=38.19%, 750=0.32% 00:39:18.140 lat (msec) : 50=0.06% 00:39:18.140 cpu : usr=4.00%, sys=6.00%, ctx=3389, majf=0, minf=1 00:39:18.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:39:18.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.140 issued rwts: total=1536,1852,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:18.140 job3: (groupid=0, jobs=1): err= 0: pid=448884: Mon Nov 18 00:44:41 2024 00:39:18.140 read: IOPS=1426, BW=5704KiB/s (5841kB/s)(5824KiB/1021msec) 00:39:18.140 slat (nsec): min=4558, max=36783, avg=9011.14, stdev=4939.99 00:39:18.140 clat (usec): min=211, max=41464, avg=456.19, stdev=2821.16 00:39:18.140 lat (usec): min=217, max=41498, avg=465.20, stdev=2822.79 00:39:18.140 clat percentiles (usec): 00:39:18.140 | 1.00th=[ 219], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 241], 00:39:18.140 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:39:18.140 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 322], 00:39:18.140 | 99.00th=[ 429], 99.50th=[ 1418], 99.90th=[41157], 99.95th=[41681], 00:39:18.140 | 99.99th=[41681] 00:39:18.140 write: IOPS=1504, BW=6018KiB/s (6162kB/s)(6144KiB/1021msec); 0 zone resets 00:39:18.140 slat (nsec): min=6015, max=35126, avg=10820.43, stdev=4904.27 00:39:18.140 clat (usec): min=158, max=422, avg=207.35, stdev=40.21 00:39:18.140 lat (usec): min=165, max=457, avg=218.17, stdev=40.53 00:39:18.140 clat percentiles (usec): 00:39:18.140 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 00:39:18.140 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 200], 00:39:18.140 | 70.00th=[ 212], 80.00th=[ 235], 90.00th=[ 265], 95.00th=[ 297], 00:39:18.140 | 99.00th=[ 347], 99.50th=[ 392], 99.90th=[ 416], 99.95th=[ 424], 00:39:18.140 | 99.99th=[ 424] 00:39:18.141 bw ( KiB/s): min= 4096, max= 8192, per=35.93%, avg=6144.00, stdev=2896.31, samples=2 00:39:18.141 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:39:18.141 lat (usec) : 250=65.27%, 500=34.43%, 750=0.03% 00:39:18.141 lat (msec) : 2=0.03%, 50=0.23% 00:39:18.141 cpu : 
usr=1.76%, sys=2.94%, ctx=2992, majf=0, minf=1 00:39:18.141 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.141 issued rwts: total=1456,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.141 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:18.141 00:39:18.141 Run status group 0 (all jobs): 00:39:18.141 READ: bw=11.7MiB/s (12.2MB/s), 89.1KiB/s-6138KiB/s (91.3kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1032msec 00:39:18.141 WRITE: bw=16.7MiB/s (17.5MB/s), 1984KiB/s-7401KiB/s (2032kB/s-7578kB/s), io=17.2MiB (18.1MB), run=1001-1032msec 00:39:18.141 00:39:18.141 Disk stats (read/write): 00:39:18.141 nvme0n1: ios=112/512, merge=0/0, ticks=804/142, in_queue=946, util=85.77% 00:39:18.141 nvme0n2: ios=42/512, merge=0/0, ticks=1603/122, in_queue=1725, util=89.83% 00:39:18.141 nvme0n3: ios=1310/1536, merge=0/0, ticks=1204/324, in_queue=1528, util=93.63% 00:39:18.141 nvme0n4: ios=1508/1536, merge=0/0, ticks=526/308, in_queue=834, util=95.58% 00:39:18.141 00:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:18.141 [global] 00:39:18.141 thread=1 00:39:18.141 invalidate=1 00:39:18.141 rw=randwrite 00:39:18.141 time_based=1 00:39:18.141 runtime=1 00:39:18.141 ioengine=libaio 00:39:18.141 direct=1 00:39:18.141 bs=4096 00:39:18.141 iodepth=1 00:39:18.141 norandommap=0 00:39:18.141 numjobs=1 00:39:18.141 00:39:18.141 verify_dump=1 00:39:18.141 verify_backlog=512 00:39:18.141 verify_state_save=0 00:39:18.141 do_verify=1 00:39:18.141 verify=crc32c-intel 00:39:18.141 [job0] 00:39:18.141 filename=/dev/nvme0n1 00:39:18.141 [job1] 00:39:18.141 filename=/dev/nvme0n2 00:39:18.141 [job2] 00:39:18.141 filename=/dev/nvme0n3 
00:39:18.141 [job3] 00:39:18.141 filename=/dev/nvme0n4 00:39:18.399 Could not set queue depth (nvme0n1) 00:39:18.399 Could not set queue depth (nvme0n2) 00:39:18.399 Could not set queue depth (nvme0n3) 00:39:18.399 Could not set queue depth (nvme0n4) 00:39:18.399 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:18.399 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:18.399 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:18.399 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:18.399 fio-3.35 00:39:18.399 Starting 4 threads 00:39:19.773 00:39:19.773 job0: (groupid=0, jobs=1): err= 0: pid=449123: Mon Nov 18 00:44:43 2024 00:39:19.773 read: IOPS=25, BW=103KiB/s (106kB/s)(104KiB/1007msec) 00:39:19.773 slat (nsec): min=8838, max=14747, avg=13127.92, stdev=1436.72 00:39:19.773 clat (usec): min=449, max=41014, avg=33776.07, stdev=15261.35 00:39:19.773 lat (usec): min=463, max=41028, avg=33789.20, stdev=15262.24 00:39:19.773 clat percentiles (usec): 00:39:19.773 | 1.00th=[ 449], 5.00th=[ 553], 10.00th=[ 709], 20.00th=[40633], 00:39:19.773 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:19.773 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:19.773 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:19.773 | 99.99th=[41157] 00:39:19.773 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:39:19.773 slat (nsec): min=6758, max=34296, avg=9882.90, stdev=4608.16 00:39:19.773 clat (usec): min=145, max=3249, avg=237.19, stdev=174.05 00:39:19.773 lat (usec): min=152, max=3262, avg=247.07, stdev=174.58 00:39:19.773 clat percentiles (usec): 00:39:19.773 | 1.00th=[ 151], 5.00th=[ 165], 10.00th=[ 176], 20.00th=[ 188], 
00:39:19.773 | 30.00th=[ 194], 40.00th=[ 202], 50.00th=[ 217], 60.00th=[ 241], 00:39:19.773 | 70.00th=[ 245], 80.00th=[ 258], 90.00th=[ 289], 95.00th=[ 326], 00:39:19.773 | 99.00th=[ 383], 99.50th=[ 396], 99.90th=[ 3261], 99.95th=[ 3261], 00:39:19.773 | 99.99th=[ 3261] 00:39:19.773 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:39:19.773 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:19.773 lat (usec) : 250=71.75%, 500=23.23%, 750=0.56% 00:39:19.773 lat (msec) : 4=0.37%, 20=0.19%, 50=3.90% 00:39:19.773 cpu : usr=0.50%, sys=0.30%, ctx=539, majf=0, minf=1 00:39:19.773 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:19.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:19.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:19.773 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:19.773 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:19.773 job1: (groupid=0, jobs=1): err= 0: pid=449130: Mon Nov 18 00:44:43 2024 00:39:19.773 read: IOPS=20, BW=81.6KiB/s (83.6kB/s)(84.0KiB/1029msec) 00:39:19.773 slat (nsec): min=6128, max=27676, avg=13978.67, stdev=3597.45 00:39:19.773 clat (usec): min=40850, max=41031, avg=40971.10, stdev=44.90 00:39:19.773 lat (usec): min=40857, max=41044, avg=40985.08, stdev=45.44 00:39:19.773 clat percentiles (usec): 00:39:19.773 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:19.773 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:19.773 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:19.773 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:19.773 | 99.99th=[41157] 00:39:19.773 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:39:19.773 slat (usec): min=5, max=27840, avg=65.86, stdev=1229.91 00:39:19.773 clat (usec): min=128, 
max=901, avg=259.82, stdev=104.77 00:39:19.773 lat (usec): min=135, max=28247, avg=325.68, stdev=1241.09 00:39:19.773 clat percentiles (usec): 00:39:19.773 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 155], 00:39:19.773 | 30.00th=[ 202], 40.00th=[ 233], 50.00th=[ 245], 60.00th=[ 253], 00:39:19.773 | 70.00th=[ 277], 80.00th=[ 367], 90.00th=[ 396], 95.00th=[ 445], 00:39:19.773 | 99.00th=[ 506], 99.50th=[ 619], 99.90th=[ 906], 99.95th=[ 906], 00:39:19.773 | 99.99th=[ 906] 00:39:19.773 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:39:19.773 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:19.773 lat (usec) : 250=55.53%, 500=38.84%, 750=1.31%, 1000=0.38% 00:39:19.773 lat (msec) : 50=3.94% 00:39:19.773 cpu : usr=0.29%, sys=0.49%, ctx=536, majf=0, minf=1 00:39:19.773 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:19.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:19.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:19.773 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:19.773 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:19.773 job2: (groupid=0, jobs=1): err= 0: pid=449163: Mon Nov 18 00:44:43 2024 00:39:19.773 read: IOPS=20, BW=83.7KiB/s (85.7kB/s)(84.0KiB/1004msec) 00:39:19.773 slat (nsec): min=7994, max=14846, avg=12653.86, stdev=1331.42 00:39:19.773 clat (usec): min=40929, max=41037, avg=40979.18, stdev=25.06 00:39:19.774 lat (usec): min=40942, max=41049, avg=40991.83, stdev=25.08 00:39:19.774 clat percentiles (usec): 00:39:19.774 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:19.774 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:19.774 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:19.774 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:19.774 | 
99.99th=[41157] 00:39:19.774 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:39:19.774 slat (nsec): min=7400, max=56168, avg=10748.93, stdev=5045.52 00:39:19.774 clat (usec): min=171, max=832, avg=265.92, stdev=72.96 00:39:19.774 lat (usec): min=179, max=842, avg=276.67, stdev=73.41 00:39:19.774 clat percentiles (usec): 00:39:19.774 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 208], 00:39:19.774 | 30.00th=[ 225], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 269], 00:39:19.774 | 70.00th=[ 281], 80.00th=[ 302], 90.00th=[ 355], 95.00th=[ 400], 00:39:19.774 | 99.00th=[ 461], 99.50th=[ 701], 99.90th=[ 832], 99.95th=[ 832], 00:39:19.774 | 99.99th=[ 832] 00:39:19.774 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:39:19.774 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:19.774 lat (usec) : 250=47.47%, 500=47.65%, 750=0.75%, 1000=0.19% 00:39:19.774 lat (msec) : 50=3.94% 00:39:19.774 cpu : usr=0.20%, sys=0.90%, ctx=533, majf=0, minf=2 00:39:19.774 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:19.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:19.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:19.774 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:19.774 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:19.774 job3: (groupid=0, jobs=1): err= 0: pid=449174: Mon Nov 18 00:44:43 2024 00:39:19.774 read: IOPS=20, BW=82.6KiB/s (84.6kB/s)(84.0KiB/1017msec) 00:39:19.774 slat (nsec): min=7338, max=27852, avg=14034.43, stdev=3563.28 00:39:19.774 clat (usec): min=41700, max=42022, avg=41962.36, stdev=68.82 00:39:19.774 lat (usec): min=41707, max=42035, avg=41976.40, stdev=70.45 00:39:19.774 clat percentiles (usec): 00:39:19.774 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:39:19.774 | 30.00th=[42206], 40.00th=[42206], 
50.00th=[42206], 60.00th=[42206], 00:39:19.774 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:19.774 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:19.774 | 99.99th=[42206] 00:39:19.774 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:39:19.774 slat (nsec): min=7657, max=36168, avg=11304.46, stdev=4967.19 00:39:19.774 clat (usec): min=164, max=2573, avg=250.08, stdev=107.04 00:39:19.774 lat (usec): min=172, max=2583, avg=261.39, stdev=107.02 00:39:19.774 clat percentiles (usec): 00:39:19.774 | 1.00th=[ 180], 5.00th=[ 202], 10.00th=[ 223], 20.00th=[ 233], 00:39:19.774 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:39:19.774 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 277], 95.00th=[ 297], 00:39:19.774 | 99.00th=[ 359], 99.50th=[ 400], 99.90th=[ 2573], 99.95th=[ 2573], 00:39:19.774 | 99.99th=[ 2573] 00:39:19.774 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:39:19.774 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:19.774 lat (usec) : 250=74.67%, 500=21.20% 00:39:19.774 lat (msec) : 4=0.19%, 50=3.94% 00:39:19.774 cpu : usr=0.59%, sys=0.49%, ctx=536, majf=0, minf=1 00:39:19.774 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:19.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:19.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:19.774 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:19.774 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:19.774 00:39:19.774 Run status group 0 (all jobs): 00:39:19.774 READ: bw=346KiB/s (354kB/s), 81.6KiB/s-103KiB/s (83.6kB/s-106kB/s), io=356KiB (365kB), run=1004-1029msec 00:39:19.774 WRITE: bw=7961KiB/s (8152kB/s), 1990KiB/s-2040KiB/s (2038kB/s-2089kB/s), io=8192KiB (8389kB), run=1004-1029msec 00:39:19.774 00:39:19.774 Disk stats 
(read/write): 00:39:19.774 nvme0n1: ios=71/512, merge=0/0, ticks=898/116, in_queue=1014, util=89.68% 00:39:19.774 nvme0n2: ios=69/512, merge=0/0, ticks=1121/121, in_queue=1242, util=93.90% 00:39:19.774 nvme0n3: ios=74/512, merge=0/0, ticks=776/132, in_queue=908, util=95.09% 00:39:19.774 nvme0n4: ios=65/512, merge=0/0, ticks=1599/125, in_queue=1724, util=99.89% 00:39:19.774 00:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:19.774 [global] 00:39:19.774 thread=1 00:39:19.774 invalidate=1 00:39:19.774 rw=write 00:39:19.774 time_based=1 00:39:19.774 runtime=1 00:39:19.774 ioengine=libaio 00:39:19.774 direct=1 00:39:19.774 bs=4096 00:39:19.774 iodepth=128 00:39:19.774 norandommap=0 00:39:19.774 numjobs=1 00:39:19.774 00:39:19.774 verify_dump=1 00:39:19.774 verify_backlog=512 00:39:19.774 verify_state_save=0 00:39:19.774 do_verify=1 00:39:19.774 verify=crc32c-intel 00:39:19.774 [job0] 00:39:19.774 filename=/dev/nvme0n1 00:39:19.774 [job1] 00:39:19.774 filename=/dev/nvme0n2 00:39:19.774 [job2] 00:39:19.774 filename=/dev/nvme0n3 00:39:19.774 [job3] 00:39:19.774 filename=/dev/nvme0n4 00:39:19.774 Could not set queue depth (nvme0n1) 00:39:19.774 Could not set queue depth (nvme0n2) 00:39:19.774 Could not set queue depth (nvme0n3) 00:39:19.774 Could not set queue depth (nvme0n4) 00:39:20.032 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:20.032 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:20.032 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:20.032 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:20.032 fio-3.35 00:39:20.032 Starting 4 threads 00:39:21.405 00:39:21.405 
job0: (groupid=0, jobs=1): err= 0: pid=449456: Mon Nov 18 00:44:44 2024 00:39:21.405 read: IOPS=3156, BW=12.3MiB/s (12.9MB/s)(12.5MiB/1013msec) 00:39:21.405 slat (usec): min=2, max=15388, avg=147.12, stdev=997.83 00:39:21.405 clat (msec): min=2, max=106, avg=15.94, stdev=12.96 00:39:21.405 lat (msec): min=4, max=106, avg=16.08, stdev=13.09 00:39:21.405 clat percentiles (msec): 00:39:21.405 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:39:21.405 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:39:21.405 | 70.00th=[ 14], 80.00th=[ 16], 90.00th=[ 22], 95.00th=[ 40], 00:39:21.405 | 99.00th=[ 86], 99.50th=[ 97], 99.90th=[ 107], 99.95th=[ 107], 00:39:21.405 | 99.99th=[ 107] 00:39:21.405 write: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec); 0 zone resets 00:39:21.405 slat (usec): min=3, max=24049, avg=142.99, stdev=823.71 00:39:21.405 clat (msec): min=3, max=106, avg=20.63, stdev=16.83 00:39:21.405 lat (msec): min=3, max=106, avg=20.78, stdev=16.93 00:39:21.405 clat percentiles (msec): 00:39:21.405 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 10], 00:39:21.405 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 15], 00:39:21.405 | 70.00th=[ 24], 80.00th=[ 29], 90.00th=[ 54], 95.00th=[ 59], 00:39:21.405 | 99.00th=[ 80], 99.50th=[ 89], 99.90th=[ 95], 99.95th=[ 105], 00:39:21.405 | 99.99th=[ 107] 00:39:21.405 bw ( KiB/s): min=11136, max=17520, per=23.10%, avg=14328.00, stdev=4514.17, samples=2 00:39:21.405 iops : min= 2784, max= 4380, avg=3582.00, stdev=1128.54, samples=2 00:39:21.405 lat (msec) : 4=0.10%, 10=21.23%, 20=54.11%, 50=17.10%, 100=7.23% 00:39:21.405 lat (msec) : 250=0.22% 00:39:21.405 cpu : usr=3.46%, sys=5.04%, ctx=289, majf=0, minf=13 00:39:21.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:39:21.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:21.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:21.405 issued 
rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:21.405 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:21.405 job1: (groupid=0, jobs=1): err= 0: pid=449457: Mon Nov 18 00:44:44 2024 00:39:21.405 read: IOPS=4663, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1002msec) 00:39:21.405 slat (usec): min=3, max=21348, avg=104.41, stdev=607.66 00:39:21.405 clat (usec): min=667, max=63509, avg=13623.30, stdev=7613.55 00:39:21.405 lat (usec): min=2806, max=63522, avg=13727.71, stdev=7656.35 00:39:21.405 clat percentiles (usec): 00:39:21.405 | 1.00th=[ 6259], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10814], 00:39:21.405 | 30.00th=[11207], 40.00th=[11600], 50.00th=[12125], 60.00th=[12518], 00:39:21.405 | 70.00th=[12911], 80.00th=[13435], 90.00th=[15664], 95.00th=[21890], 00:39:21.405 | 99.00th=[53740], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701], 00:39:21.405 | 99.99th=[63701] 00:39:21.405 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:39:21.405 slat (usec): min=4, max=3533, avg=89.95, stdev=371.17 00:39:21.405 clat (usec): min=6409, max=23460, avg=12268.90, stdev=3046.12 00:39:21.405 lat (usec): min=6418, max=23478, avg=12358.85, stdev=3052.03 00:39:21.405 clat percentiles (usec): 00:39:21.405 | 1.00th=[ 7832], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10159], 00:39:21.405 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11863], 60.00th=[12125], 00:39:21.405 | 70.00th=[12518], 80.00th=[12780], 90.00th=[14091], 95.00th=[21627], 00:39:21.405 | 99.00th=[23200], 99.50th=[23200], 99.90th=[23462], 99.95th=[23462], 00:39:21.405 | 99.99th=[23462] 00:39:21.405 bw ( KiB/s): min=16392, max=24072, per=32.62%, avg=20232.00, stdev=5430.58, samples=2 00:39:21.405 iops : min= 4098, max= 6018, avg=5058.00, stdev=1357.65, samples=2 00:39:21.405 lat (usec) : 750=0.01% 00:39:21.405 lat (msec) : 4=0.37%, 10=12.18%, 20=81.63%, 50=5.18%, 100=0.63% 00:39:21.405 cpu : usr=6.29%, sys=11.39%, ctx=572, majf=0, minf=15 00:39:21.405 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:39:21.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:21.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:21.405 issued rwts: total=4673,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:21.405 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:21.405 job2: (groupid=0, jobs=1): err= 0: pid=449459: Mon Nov 18 00:44:44 2024 00:39:21.405 read: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec) 00:39:21.405 slat (usec): min=2, max=22259, avg=165.66, stdev=1171.76 00:39:21.405 clat (usec): min=8474, max=76032, avg=20461.57, stdev=10053.82 00:39:21.405 lat (usec): min=8486, max=76043, avg=20627.23, stdev=10162.90 00:39:21.405 clat percentiles (usec): 00:39:21.405 | 1.00th=[ 9241], 5.00th=[11994], 10.00th=[12649], 20.00th=[13304], 00:39:21.405 | 30.00th=[15139], 40.00th=[15664], 50.00th=[16909], 60.00th=[19268], 00:39:21.405 | 70.00th=[23200], 80.00th=[25035], 90.00th=[31065], 95.00th=[38536], 00:39:21.405 | 99.00th=[72877], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:39:21.405 | 99.99th=[76022] 00:39:21.405 write: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.1MiB/1012msec); 0 zone resets 00:39:21.405 slat (usec): min=3, max=42037, avg=214.16, stdev=1320.19 00:39:21.405 clat (msec): min=7, max=115, avg=25.93, stdev=21.55 00:39:21.405 lat (msec): min=7, max=115, avg=26.14, stdev=21.68 00:39:21.405 clat percentiles (msec): 00:39:21.405 | 1.00th=[ 10], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:39:21.405 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 24], 00:39:21.405 | 70.00th=[ 25], 80.00th=[ 28], 90.00th=[ 57], 95.00th=[ 73], 00:39:21.405 | 99.00th=[ 111], 99.50th=[ 114], 99.90th=[ 115], 99.95th=[ 115], 00:39:21.405 | 99.99th=[ 115] 00:39:21.405 bw ( KiB/s): min= 8456, max=12024, per=16.51%, avg=10240.00, stdev=2522.96, samples=2 00:39:21.405 iops : min= 2114, max= 3006, avg=2560.00, stdev=630.74, samples=2 00:39:21.405 
lat (msec) : 10=1.15%, 20=59.33%, 50=31.97%, 100=6.17%, 250=1.38% 00:39:21.405 cpu : usr=3.26%, sys=4.95%, ctx=270, majf=0, minf=13 00:39:21.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:39:21.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:21.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:21.405 issued rwts: total=2560,2591,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:21.405 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:21.405 job3: (groupid=0, jobs=1): err= 0: pid=449465: Mon Nov 18 00:44:44 2024 00:39:21.405 read: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec) 00:39:21.405 slat (usec): min=2, max=16001, avg=112.21, stdev=855.53 00:39:21.405 clat (usec): min=5530, max=41673, avg=14610.20, stdev=4760.42 00:39:21.405 lat (usec): min=5533, max=41684, avg=14722.41, stdev=4824.73 00:39:21.405 clat percentiles (usec): 00:39:21.405 | 1.00th=[ 5997], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11863], 00:39:21.405 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13173], 60.00th=[13698], 00:39:21.405 | 70.00th=[15008], 80.00th=[16909], 90.00th=[21365], 95.00th=[23987], 00:39:21.405 | 99.00th=[33162], 99.50th=[38011], 99.90th=[40633], 99.95th=[41681], 00:39:21.405 | 99.99th=[41681] 00:39:21.405 write: IOPS=4356, BW=17.0MiB/s (17.8MB/s)(17.2MiB/1013msec); 0 zone resets 00:39:21.405 slat (usec): min=3, max=12828, avg=113.94, stdev=768.99 00:39:21.405 clat (usec): min=738, max=41675, avg=15565.68, stdev=6047.55 00:39:21.405 lat (usec): min=753, max=41691, avg=15679.63, stdev=6110.54 00:39:21.405 clat percentiles (usec): 00:39:21.405 | 1.00th=[ 2147], 5.00th=[ 8225], 10.00th=[10159], 20.00th=[11994], 00:39:21.405 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13829], 60.00th=[14222], 00:39:21.405 | 70.00th=[15533], 80.00th=[21890], 90.00th=[25035], 95.00th=[27919], 00:39:21.405 | 99.00th=[31327], 99.50th=[32375], 99.90th=[37487], 99.95th=[41681], 00:39:21.405 | 
99.99th=[41681] 00:39:21.405 bw ( KiB/s): min=16384, max=17896, per=27.63%, avg=17140.00, stdev=1069.15, samples=2 00:39:21.405 iops : min= 4096, max= 4474, avg=4285.00, stdev=267.29, samples=2 00:39:21.405 lat (usec) : 750=0.02%, 1000=0.12% 00:39:21.405 lat (msec) : 2=0.34%, 4=0.54%, 10=6.68%, 20=73.92%, 50=18.38% 00:39:21.405 cpu : usr=4.64%, sys=8.00%, ctx=363, majf=0, minf=10 00:39:21.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:21.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:21.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:21.405 issued rwts: total=4096,4413,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:21.405 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:21.405 00:39:21.405 Run status group 0 (all jobs): 00:39:21.405 READ: bw=56.0MiB/s (58.7MB/s), 9.88MiB/s-18.2MiB/s (10.4MB/s-19.1MB/s), io=56.7MiB (59.5MB), run=1002-1013msec 00:39:21.405 WRITE: bw=60.6MiB/s (63.5MB/s), 10.0MiB/s-20.0MiB/s (10.5MB/s-20.9MB/s), io=61.4MiB (64.3MB), run=1002-1013msec 00:39:21.405 00:39:21.405 Disk stats (read/write): 00:39:21.405 nvme0n1: ios=3092/3223, merge=0/0, ticks=42159/58451, in_queue=100610, util=91.68% 00:39:21.405 nvme0n2: ios=3897/4096, merge=0/0, ticks=14118/12076, in_queue=26194, util=99.49% 00:39:21.405 nvme0n3: ios=2077/2335, merge=0/0, ticks=17272/27737, in_queue=45009, util=95.63% 00:39:21.405 nvme0n4: ios=3641/3879, merge=0/0, ticks=45250/53043, in_queue=98293, util=95.60% 00:39:21.405 00:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:21.405 [global] 00:39:21.405 thread=1 00:39:21.405 invalidate=1 00:39:21.405 rw=randwrite 00:39:21.405 time_based=1 00:39:21.405 runtime=1 00:39:21.405 ioengine=libaio 00:39:21.405 direct=1 00:39:21.405 bs=4096 00:39:21.405 iodepth=128 
00:39:21.405 norandommap=0 00:39:21.405 numjobs=1 00:39:21.405 00:39:21.405 verify_dump=1 00:39:21.406 verify_backlog=512 00:39:21.406 verify_state_save=0 00:39:21.406 do_verify=1 00:39:21.406 verify=crc32c-intel 00:39:21.406 [job0] 00:39:21.406 filename=/dev/nvme0n1 00:39:21.406 [job1] 00:39:21.406 filename=/dev/nvme0n2 00:39:21.406 [job2] 00:39:21.406 filename=/dev/nvme0n3 00:39:21.406 [job3] 00:39:21.406 filename=/dev/nvme0n4 00:39:21.406 Could not set queue depth (nvme0n1) 00:39:21.406 Could not set queue depth (nvme0n2) 00:39:21.406 Could not set queue depth (nvme0n3) 00:39:21.406 Could not set queue depth (nvme0n4) 00:39:21.406 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:21.406 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:21.406 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:21.406 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:21.406 fio-3.35 00:39:21.406 Starting 4 threads 00:39:22.779 00:39:22.779 job0: (groupid=0, jobs=1): err= 0: pid=449690: Mon Nov 18 00:44:46 2024 00:39:22.779 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:39:22.779 slat (usec): min=2, max=26025, avg=113.48, stdev=842.88 00:39:22.779 clat (usec): min=8008, max=83505, avg=14770.18, stdev=9864.73 00:39:22.779 lat (usec): min=8019, max=83524, avg=14883.66, stdev=9937.85 00:39:22.779 clat percentiles (usec): 00:39:22.779 | 1.00th=[ 8717], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:39:22.779 | 30.00th=[10814], 40.00th=[11469], 50.00th=[12387], 60.00th=[12649], 00:39:22.779 | 70.00th=[13042], 80.00th=[14091], 90.00th=[21890], 95.00th=[33817], 00:39:22.779 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:39:22.779 | 99.99th=[83362] 00:39:22.779 write: IOPS=4376, 
BW=17.1MiB/s (17.9MB/s)(17.2MiB/1004msec); 0 zone resets 00:39:22.779 slat (usec): min=3, max=17924, avg=113.11, stdev=664.42 00:39:22.779 clat (msec): min=3, max=105, avg=14.85, stdev=13.61 00:39:22.779 lat (msec): min=4, max=105, avg=14.96, stdev=13.69 00:39:22.779 clat percentiles (msec): 00:39:22.779 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:39:22.779 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:39:22.779 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 15], 95.00th=[ 26], 00:39:22.779 | 99.00th=[ 100], 99.50th=[ 106], 99.90th=[ 106], 99.95th=[ 106], 00:39:22.779 | 99.99th=[ 106] 00:39:22.779 bw ( KiB/s): min=16648, max=17523, per=23.93%, avg=17085.50, stdev=618.72, samples=2 00:39:22.779 iops : min= 4162, max= 4380, avg=4271.00, stdev=154.15, samples=2 00:39:22.779 lat (msec) : 4=0.01%, 10=5.09%, 20=87.03%, 50=4.69%, 100=2.72% 00:39:22.779 lat (msec) : 250=0.46% 00:39:22.779 cpu : usr=5.68%, sys=6.68%, ctx=462, majf=0, minf=1 00:39:22.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:22.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:22.779 issued rwts: total=4096,4394,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:22.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:22.779 job1: (groupid=0, jobs=1): err= 0: pid=449691: Mon Nov 18 00:44:46 2024 00:39:22.779 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:39:22.779 slat (usec): min=2, max=5779, avg=93.20, stdev=541.27 00:39:22.779 clat (usec): min=6586, max=20919, avg=12115.59, stdev=1694.31 00:39:22.779 lat (usec): min=6605, max=20923, avg=12208.79, stdev=1736.76 00:39:22.779 clat percentiles (usec): 00:39:22.779 | 1.00th=[ 8717], 5.00th=[10028], 10.00th=[10552], 20.00th=[11076], 00:39:22.779 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11994], 00:39:22.779 | 70.00th=[12387], 
80.00th=[13304], 90.00th=[14615], 95.00th=[15401], 00:39:22.779 | 99.00th=[17433], 99.50th=[17695], 99.90th=[19268], 99.95th=[19268], 00:39:22.779 | 99.99th=[20841] 00:39:22.779 write: IOPS=5454, BW=21.3MiB/s (22.3MB/s)(21.4MiB/1006msec); 0 zone resets 00:39:22.779 slat (usec): min=3, max=5427, avg=87.12, stdev=436.97 00:39:22.779 clat (usec): min=4781, max=18492, avg=11876.93, stdev=1668.27 00:39:22.779 lat (usec): min=5372, max=18498, avg=11964.06, stdev=1702.84 00:39:22.779 clat percentiles (usec): 00:39:22.779 | 1.00th=[ 7832], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[10683], 00:39:22.779 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11863], 60.00th=[11994], 00:39:22.779 | 70.00th=[12518], 80.00th=[13304], 90.00th=[13698], 95.00th=[14353], 00:39:22.779 | 99.00th=[17433], 99.50th=[17433], 99.90th=[18482], 99.95th=[18482], 00:39:22.779 | 99.99th=[18482] 00:39:22.779 bw ( KiB/s): min=19592, max=23288, per=30.02%, avg=21440.00, stdev=2613.47, samples=2 00:39:22.779 iops : min= 4898, max= 5822, avg=5360.00, stdev=653.37, samples=2 00:39:22.779 lat (msec) : 10=6.67%, 20=93.32%, 50=0.01% 00:39:22.779 cpu : usr=5.07%, sys=9.65%, ctx=508, majf=0, minf=2 00:39:22.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:39:22.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:22.779 issued rwts: total=5120,5487,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:22.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:22.779 job2: (groupid=0, jobs=1): err= 0: pid=449692: Mon Nov 18 00:44:46 2024 00:39:22.779 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:39:22.779 slat (usec): min=2, max=32968, avg=126.85, stdev=958.80 00:39:22.779 clat (usec): min=6682, max=46250, avg=16115.08, stdev=6029.90 00:39:22.779 lat (usec): min=6686, max=46319, avg=16241.93, stdev=6080.15 00:39:22.779 clat percentiles (usec): 00:39:22.779 
| 1.00th=[ 9896], 5.00th=[11731], 10.00th=[12125], 20.00th=[13042], 00:39:22.779 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14222], 60.00th=[14484], 00:39:22.779 | 70.00th=[15270], 80.00th=[17695], 90.00th=[21103], 95.00th=[27919], 00:39:22.779 | 99.00th=[45351], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:39:22.779 | 99.99th=[46400] 00:39:22.779 write: IOPS=3970, BW=15.5MiB/s (16.3MB/s)(15.6MiB/1003msec); 0 zone resets 00:39:22.779 slat (usec): min=3, max=12096, avg=129.19, stdev=673.85 00:39:22.779 clat (usec): min=590, max=45557, avg=17441.92, stdev=8900.05 00:39:22.779 lat (usec): min=3521, max=45562, avg=17571.11, stdev=8963.11 00:39:22.779 clat percentiles (usec): 00:39:22.779 | 1.00th=[ 5014], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[12911], 00:39:22.779 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14615], 00:39:22.779 | 70.00th=[15008], 80.00th=[20841], 90.00th=[29492], 95.00th=[43254], 00:39:22.779 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:39:22.779 | 99.99th=[45351] 00:39:22.779 bw ( KiB/s): min=12288, max=18544, per=21.59%, avg=15416.00, stdev=4423.66, samples=2 00:39:22.779 iops : min= 3072, max= 4636, avg=3854.00, stdev=1105.92, samples=2 00:39:22.779 lat (usec) : 750=0.01% 00:39:22.779 lat (msec) : 4=0.08%, 10=4.37%, 20=78.67%, 50=16.86% 00:39:22.779 cpu : usr=3.29%, sys=7.29%, ctx=325, majf=0, minf=1 00:39:22.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:22.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:22.779 issued rwts: total=3584,3982,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:22.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:22.779 job3: (groupid=0, jobs=1): err= 0: pid=449693: Mon Nov 18 00:44:46 2024 00:39:22.779 read: IOPS=4038, BW=15.8MiB/s (16.5MB/s)(15.9MiB/1005msec) 00:39:22.779 slat (usec): min=2, 
max=14686, avg=110.20, stdev=693.35 00:39:22.779 clat (usec): min=914, max=28455, avg=14177.01, stdev=2923.00 00:39:22.779 lat (usec): min=3477, max=28467, avg=14287.21, stdev=2975.02 00:39:22.779 clat percentiles (usec): 00:39:22.779 | 1.00th=[ 4293], 5.00th=[10290], 10.00th=[11338], 20.00th=[12780], 00:39:22.779 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13829], 60.00th=[14222], 00:39:22.779 | 70.00th=[14877], 80.00th=[16319], 90.00th=[17171], 95.00th=[17957], 00:39:22.779 | 99.00th=[23462], 99.50th=[26346], 99.90th=[28443], 99.95th=[28443], 00:39:22.779 | 99.99th=[28443] 00:39:22.780 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:39:22.780 slat (usec): min=2, max=14667, avg=128.27, stdev=773.90 00:39:22.780 clat (usec): min=3679, max=70452, avg=17076.38, stdev=9923.64 00:39:22.780 lat (usec): min=3683, max=70458, avg=17204.65, stdev=9992.23 00:39:22.780 clat percentiles (usec): 00:39:22.780 | 1.00th=[ 5145], 5.00th=[10028], 10.00th=[11600], 20.00th=[13042], 00:39:22.780 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14353], 60.00th=[14615], 00:39:22.780 | 70.00th=[16450], 80.00th=[17695], 90.00th=[25560], 95.00th=[27395], 00:39:22.780 | 99.00th=[69731], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:39:22.780 | 99.99th=[70779] 00:39:22.780 bw ( KiB/s): min=14912, max=17856, per=22.94%, avg=16384.00, stdev=2081.72, samples=2 00:39:22.780 iops : min= 3728, max= 4464, avg=4096.00, stdev=520.43, samples=2 00:39:22.780 lat (usec) : 1000=0.01% 00:39:22.780 lat (msec) : 4=0.54%, 10=3.91%, 20=85.59%, 50=8.58%, 100=1.36% 00:39:22.780 cpu : usr=1.99%, sys=5.38%, ctx=393, majf=0, minf=1 00:39:22.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:22.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:22.780 issued rwts: total=4059,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:22.780 latency 
: target=0, window=0, percentile=100.00%, depth=128 00:39:22.780 00:39:22.780 Run status group 0 (all jobs): 00:39:22.780 READ: bw=65.5MiB/s (68.6MB/s), 14.0MiB/s-19.9MiB/s (14.6MB/s-20.8MB/s), io=65.9MiB (69.1MB), run=1003-1006msec 00:39:22.780 WRITE: bw=69.7MiB/s (73.1MB/s), 15.5MiB/s-21.3MiB/s (16.3MB/s-22.3MB/s), io=70.2MiB (73.6MB), run=1003-1006msec 00:39:22.780 00:39:22.780 Disk stats (read/write): 00:39:22.780 nvme0n1: ios=3392/3584, merge=0/0, ticks=16638/17645, in_queue=34283, util=86.67% 00:39:22.780 nvme0n2: ios=4437/4608, merge=0/0, ticks=21061/19960, in_queue=41021, util=86.47% 00:39:22.780 nvme0n3: ios=3072/3080, merge=0/0, ticks=29970/42349, in_queue=72319, util=88.90% 00:39:22.780 nvme0n4: ios=3113/3584, merge=0/0, ticks=21456/28899, in_queue=50355, util=89.65% 00:39:22.780 00:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:22.780 00:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=449825 00:39:22.780 00:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:22.780 00:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:22.780 [global] 00:39:22.780 thread=1 00:39:22.780 invalidate=1 00:39:22.780 rw=read 00:39:22.780 time_based=1 00:39:22.780 runtime=10 00:39:22.780 ioengine=libaio 00:39:22.780 direct=1 00:39:22.780 bs=4096 00:39:22.780 iodepth=1 00:39:22.780 norandommap=1 00:39:22.780 numjobs=1 00:39:22.780 00:39:22.780 [job0] 00:39:22.780 filename=/dev/nvme0n1 00:39:22.780 [job1] 00:39:22.780 filename=/dev/nvme0n2 00:39:22.780 [job2] 00:39:22.780 filename=/dev/nvme0n3 00:39:22.780 [job3] 00:39:22.780 filename=/dev/nvme0n4 00:39:22.780 Could not set queue depth (nvme0n1) 00:39:22.780 Could not set queue depth (nvme0n2) 00:39:22.780 Could not set queue depth 
(nvme0n3) 00:39:22.780 Could not set queue depth (nvme0n4) 00:39:22.780 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:22.780 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:22.780 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:22.780 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:22.780 fio-3.35 00:39:22.780 Starting 4 threads 00:39:26.060 00:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:26.060 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=39256064, buflen=4096 00:39:26.060 fio: pid=449918, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:26.060 00:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:26.060 00:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:26.060 00:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:26.060 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=38420480, buflen=4096 00:39:26.060 fio: pid=449917, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:26.627 00:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:26.627 00:44:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:26.627 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=13668352, buflen=4096 00:39:26.627 fio: pid=449915, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:26.886 00:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:26.886 00:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:26.886 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=589824, buflen=4096 00:39:26.886 fio: pid=449916, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:39:26.886 00:39:26.886 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=449915: Mon Nov 18 00:44:50 2024 00:39:26.886 read: IOPS=957, BW=3828KiB/s (3920kB/s)(13.0MiB/3487msec) 00:39:26.886 slat (usec): min=4, max=10947, avg=14.99, stdev=189.35 00:39:26.886 clat (usec): min=180, max=41097, avg=1020.08, stdev=5074.94 00:39:26.886 lat (usec): min=196, max=52045, avg=1035.07, stdev=5105.45 00:39:26.886 clat percentiles (usec): 00:39:26.886 | 1.00th=[ 215], 5.00th=[ 237], 10.00th=[ 247], 20.00th=[ 269], 00:39:26.886 | 30.00th=[ 318], 40.00th=[ 379], 50.00th=[ 383], 60.00th=[ 392], 00:39:26.886 | 70.00th=[ 424], 80.00th=[ 465], 90.00th=[ 510], 95.00th=[ 537], 00:39:26.886 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:26.886 | 99.99th=[41157] 00:39:26.886 bw ( KiB/s): min= 96, max=10624, per=18.71%, avg=4433.33, stdev=4945.24, samples=6 00:39:26.886 iops : min= 24, max= 2656, avg=1108.33, stdev=1236.31, samples=6 00:39:26.886 lat (usec) : 
250=11.95%, 500=76.27%, 750=10.13% 00:39:26.886 lat (msec) : 2=0.03%, 50=1.59% 00:39:26.886 cpu : usr=0.46%, sys=1.26%, ctx=3341, majf=0, minf=1 00:39:26.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:26.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:26.886 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:26.886 issued rwts: total=3338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:26.886 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:26.886 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=449916: Mon Nov 18 00:44:50 2024 00:39:26.886 read: IOPS=38, BW=152KiB/s (156kB/s)(576KiB/3789msec) 00:39:26.886 slat (usec): min=7, max=14907, avg=173.29, stdev=1379.01 00:39:26.886 clat (usec): min=243, max=42389, avg=26129.76, stdev=20045.83 00:39:26.886 lat (usec): min=266, max=55952, avg=26252.33, stdev=20158.46 00:39:26.886 clat percentiles (usec): 00:39:26.886 | 1.00th=[ 273], 5.00th=[ 293], 10.00th=[ 322], 20.00th=[ 351], 00:39:26.886 | 30.00th=[ 375], 40.00th=[40633], 50.00th=[41157], 60.00th=[41681], 00:39:26.886 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:26.886 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:26.886 | 99.99th=[42206] 00:39:26.886 bw ( KiB/s): min= 96, max= 352, per=0.65%, avg=155.86, stdev=91.51, samples=7 00:39:26.886 iops : min= 24, max= 88, avg=38.86, stdev=22.95, samples=7 00:39:26.886 lat (usec) : 250=0.69%, 500=35.86%, 750=0.69% 00:39:26.886 lat (msec) : 50=62.07% 00:39:26.886 cpu : usr=0.03%, sys=0.29%, ctx=148, majf=0, minf=1 00:39:26.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:26.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:26.886 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:26.886 issued rwts: total=145,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:39:26.886 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:26.886 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=449917: Mon Nov 18 00:44:50 2024 00:39:26.886 read: IOPS=2934, BW=11.5MiB/s (12.0MB/s)(36.6MiB/3197msec) 00:39:26.886 slat (usec): min=5, max=11469, avg=13.02, stdev=142.53 00:39:26.886 clat (usec): min=198, max=41054, avg=322.12, stdev=599.60 00:39:26.886 lat (usec): min=204, max=41062, avg=335.14, stdev=616.23 00:39:26.886 clat percentiles (usec): 00:39:26.886 | 1.00th=[ 215], 5.00th=[ 227], 10.00th=[ 239], 20.00th=[ 255], 00:39:26.887 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 293], 00:39:26.887 | 70.00th=[ 322], 80.00th=[ 383], 90.00th=[ 453], 95.00th=[ 490], 00:39:26.887 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[ 619], 99.95th=[ 644], 00:39:26.887 | 99.99th=[41157] 00:39:26.887 bw ( KiB/s): min= 9304, max=13528, per=49.44%, avg=11714.67, stdev=1819.19, samples=6 00:39:26.887 iops : min= 2326, max= 3382, avg=2928.67, stdev=454.80, samples=6 00:39:26.887 lat (usec) : 250=16.42%, 500=79.66%, 750=3.88%, 1000=0.01% 00:39:26.887 lat (msec) : 50=0.02% 00:39:26.887 cpu : usr=1.85%, sys=5.13%, ctx=9384, majf=0, minf=2 00:39:26.887 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:26.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:26.887 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:26.887 issued rwts: total=9381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:26.887 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:26.887 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=449918: Mon Nov 18 00:44:50 2024 00:39:26.887 read: IOPS=3310, BW=12.9MiB/s (13.6MB/s)(37.4MiB/2895msec) 00:39:26.887 slat (nsec): min=4596, max=54356, avg=11596.92, stdev=6069.20 00:39:26.887 clat (usec): 
min=205, max=7510, avg=287.00, stdev=162.01 00:39:26.887 lat (usec): min=212, max=7515, avg=298.60, stdev=162.74 00:39:26.887 clat percentiles (usec): 00:39:26.887 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 239], 00:39:26.887 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 269], 00:39:26.887 | 70.00th=[ 281], 80.00th=[ 302], 90.00th=[ 388], 95.00th=[ 461], 00:39:26.887 | 99.00th=[ 537], 99.50th=[ 562], 99.90th=[ 635], 99.95th=[ 3916], 00:39:26.887 | 99.99th=[ 7504] 00:39:26.887 bw ( KiB/s): min=11064, max=14600, per=57.16%, avg=13545.60, stdev=1523.77, samples=5 00:39:26.887 iops : min= 2766, max= 3650, avg=3386.40, stdev=380.94, samples=5 00:39:26.887 lat (usec) : 250=34.75%, 500=62.71%, 750=2.46% 00:39:26.887 lat (msec) : 4=0.02%, 10=0.04% 00:39:26.887 cpu : usr=1.73%, sys=4.15%, ctx=9588, majf=0, minf=2 00:39:26.887 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:26.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:26.887 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:26.887 issued rwts: total=9585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:26.887 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:26.887 00:39:26.887 Run status group 0 (all jobs): 00:39:26.887 READ: bw=23.1MiB/s (24.3MB/s), 152KiB/s-12.9MiB/s (156kB/s-13.6MB/s), io=87.7MiB (91.9MB), run=2895-3789msec 00:39:26.887 00:39:26.887 Disk stats (read/write): 00:39:26.887 nvme0n1: ios=3334/0, merge=0/0, ticks=3258/0, in_queue=3258, util=95.54% 00:39:26.887 nvme0n2: ios=139/0, merge=0/0, ticks=3552/0, in_queue=3552, util=96.09% 00:39:26.887 nvme0n3: ios=9146/0, merge=0/0, ticks=2850/0, in_queue=2850, util=96.19% 00:39:26.887 nvme0n4: ios=9529/0, merge=0/0, ticks=2796/0, in_queue=2796, util=100.00% 00:39:27.143 00:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:39:27.143 00:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:27.400 00:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:27.401 00:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:27.659 00:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:27.659 00:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:27.919 00:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:27.919 00:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:28.183 00:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:28.183 00:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 449825 00:39:28.183 00:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:28.183 00:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:28.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:28.440 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:28.440 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:39:28.440 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:28.440 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:28.440 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:28.440 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:28.440 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:39:28.440 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:28.440 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:28.440 nvmf hotplug test: fio failed as expected 00:39:28.440 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:39:28.698 00:44:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:28.698 rmmod nvme_tcp 00:39:28.698 rmmod nvme_fabrics 00:39:28.698 rmmod nvme_keyring 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 447936 ']' 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 447936 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 447936 ']' 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 447936 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 447936 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 447936' 00:39:28.698 killing process with pid 447936 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 447936 00:39:28.698 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 447936 00:39:28.958 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:28.958 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:28.958 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:28.958 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:39:28.958 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:39:28.958 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:28.958 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:39:28.958 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:28.958 00:44:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:28.958 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:28.958 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:28.958 00:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:30.868 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:30.868 00:39:30.868 real 0m23.677s 00:39:30.868 user 1m6.214s 00:39:30.868 sys 0m10.573s 00:39:30.868 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:30.868 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:30.868 ************************************ 00:39:30.868 END TEST nvmf_fio_target 00:39:30.868 ************************************ 00:39:30.868 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:30.868 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:30.868 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:30.868 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:30.868 ************************************ 00:39:30.868 START TEST nvmf_bdevio 00:39:30.868 ************************************ 00:39:30.868 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:31.126 * Looking for test storage... 00:39:31.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:39:31.126 00:44:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # 
(( ver1[v] < ver2[v] )) 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:31.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.126 --rc genhtml_branch_coverage=1 00:39:31.126 --rc genhtml_function_coverage=1 00:39:31.126 --rc genhtml_legend=1 00:39:31.126 --rc geninfo_all_blocks=1 00:39:31.126 --rc geninfo_unexecuted_blocks=1 00:39:31.126 00:39:31.126 ' 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:31.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.126 --rc genhtml_branch_coverage=1 00:39:31.126 --rc genhtml_function_coverage=1 00:39:31.126 --rc genhtml_legend=1 00:39:31.126 --rc geninfo_all_blocks=1 00:39:31.126 --rc geninfo_unexecuted_blocks=1 00:39:31.126 00:39:31.126 ' 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:31.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.126 --rc genhtml_branch_coverage=1 00:39:31.126 --rc genhtml_function_coverage=1 00:39:31.126 --rc genhtml_legend=1 00:39:31.126 --rc geninfo_all_blocks=1 00:39:31.126 --rc geninfo_unexecuted_blocks=1 00:39:31.126 00:39:31.126 ' 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:31.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.126 --rc genhtml_branch_coverage=1 00:39:31.126 --rc genhtml_function_coverage=1 00:39:31.126 --rc genhtml_legend=1 00:39:31.126 --rc 
geninfo_all_blocks=1 00:39:31.126 --rc geninfo_unexecuted_blocks=1 00:39:31.126 00:39:31.126 ' 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:31.126 00:44:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:31.126 00:44:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:39:31.126 00:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:33.162 00:44:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:33.162 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:33.162 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:33.162 00:44:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:33.162 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:33.162 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:33.163 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:33.163 00:44:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:39:33.163 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:33.422 00:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:33.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:33.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:39:33.422 00:39:33.422 --- 10.0.0.2 ping statistics --- 00:39:33.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:33.422 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:33.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:33.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:39:33.422 00:39:33.422 --- 10.0.0.1 ping statistics --- 00:39:33.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:33.422 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=452608 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 452608 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 452608 ']' 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:33.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:33.422 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:33.422 [2024-11-18 00:44:57.094744] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:33.422 [2024-11-18 00:44:57.095948] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:39:33.422 [2024-11-18 00:44:57.096025] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:33.422 [2024-11-18 00:44:57.175948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:33.422 [2024-11-18 00:44:57.225610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:33.422 [2024-11-18 00:44:57.225672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:33.422 [2024-11-18 00:44:57.225686] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:33.422 [2024-11-18 00:44:57.225697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:33.422 [2024-11-18 00:44:57.225706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:33.422 [2024-11-18 00:44:57.227381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:33.422 [2024-11-18 00:44:57.227434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:33.422 [2024-11-18 00:44:57.227462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:33.422 [2024-11-18 00:44:57.227465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:33.680 [2024-11-18 00:44:57.322907] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:33.680 [2024-11-18 00:44:57.323095] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:33.680 [2024-11-18 00:44:57.323400] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:33.680 [2024-11-18 00:44:57.324026] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:33.680 [2024-11-18 00:44:57.324238] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:33.680 [2024-11-18 00:44:57.380206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:33.680 Malloc0 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:33.680 [2024-11-18 00:44:57.460535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:33.680 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:33.680 { 00:39:33.680 "params": { 00:39:33.680 "name": "Nvme$subsystem", 00:39:33.680 "trtype": "$TEST_TRANSPORT", 00:39:33.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:33.680 "adrfam": "ipv4", 00:39:33.680 "trsvcid": "$NVMF_PORT", 00:39:33.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:33.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:33.680 "hdgst": ${hdgst:-false}, 00:39:33.680 "ddgst": ${ddgst:-false} 00:39:33.680 }, 00:39:33.681 "method": "bdev_nvme_attach_controller" 00:39:33.681 } 00:39:33.681 EOF 00:39:33.681 )") 00:39:33.681 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:39:33.681 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:39:33.681 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:39:33.681 00:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:33.681 "params": { 00:39:33.681 "name": "Nvme1", 00:39:33.681 "trtype": "tcp", 00:39:33.681 "traddr": "10.0.0.2", 00:39:33.681 "adrfam": "ipv4", 00:39:33.681 "trsvcid": "4420", 00:39:33.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:33.681 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:33.681 "hdgst": false, 00:39:33.681 "ddgst": false 00:39:33.681 }, 00:39:33.681 "method": "bdev_nvme_attach_controller" 00:39:33.681 }' 00:39:33.939 [2024-11-18 00:44:57.511082] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:39:33.939 [2024-11-18 00:44:57.511146] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452694 ] 00:39:33.939 [2024-11-18 00:44:57.580999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:33.939 [2024-11-18 00:44:57.634362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:33.939 [2024-11-18 00:44:57.634392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:33.939 [2024-11-18 00:44:57.634396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:34.197 I/O targets: 00:39:34.197 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:39:34.197 00:39:34.197 00:39:34.197 CUnit - A unit testing framework for C - Version 2.1-3 00:39:34.197 http://cunit.sourceforge.net/ 00:39:34.197 00:39:34.197 00:39:34.197 Suite: bdevio tests on: Nvme1n1 00:39:34.197 Test: blockdev write read block ...passed 00:39:34.455 Test: blockdev write zeroes read block ...passed 00:39:34.455 Test: blockdev write zeroes read no split ...passed 00:39:34.455 Test: blockdev 
write zeroes read split ...passed 00:39:34.455 Test: blockdev write zeroes read split partial ...passed 00:39:34.455 Test: blockdev reset ...[2024-11-18 00:44:58.119578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:39:34.455 [2024-11-18 00:44:58.119693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e27b70 (9): Bad file descriptor 00:39:34.455 [2024-11-18 00:44:58.123917] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:39:34.455 passed 00:39:34.455 Test: blockdev write read 8 blocks ...passed 00:39:34.455 Test: blockdev write read size > 128k ...passed 00:39:34.455 Test: blockdev write read invalid size ...passed 00:39:34.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:34.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:34.455 Test: blockdev write read max offset ...passed 00:39:34.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:34.713 Test: blockdev writev readv 8 blocks ...passed 00:39:34.713 Test: blockdev writev readv 30 x 1block ...passed 00:39:34.713 Test: blockdev writev readv block ...passed 00:39:34.713 Test: blockdev writev readv size > 128k ...passed 00:39:34.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:34.713 Test: blockdev comparev and writev ...[2024-11-18 00:44:58.375515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:34.713 [2024-11-18 00:44:58.375554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:34.713 [2024-11-18 00:44:58.375579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:34.713 
[2024-11-18 00:44:58.375596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:34.714 [2024-11-18 00:44:58.376036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:34.714 [2024-11-18 00:44:58.376062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:39:34.714 [2024-11-18 00:44:58.376084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:34.714 [2024-11-18 00:44:58.376100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:39:34.714 [2024-11-18 00:44:58.376497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:34.714 [2024-11-18 00:44:58.376523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:39:34.714 [2024-11-18 00:44:58.376552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:34.714 [2024-11-18 00:44:58.376572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:34.714 [2024-11-18 00:44:58.376999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:34.714 [2024-11-18 00:44:58.377023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:39:34.714 [2024-11-18 00:44:58.377045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:34.714 [2024-11-18 00:44:58.377061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:39:34.714 passed 00:39:34.714 Test: blockdev nvme passthru rw ...passed 00:39:34.714 Test: blockdev nvme passthru vendor specific ...[2024-11-18 00:44:58.458591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:34.714 [2024-11-18 00:44:58.458619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:39:34.714 [2024-11-18 00:44:58.458765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:34.714 [2024-11-18 00:44:58.458789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:39:34.714 [2024-11-18 00:44:58.458930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:34.714 [2024-11-18 00:44:58.458954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:34.714 [2024-11-18 00:44:58.459097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:34.714 [2024-11-18 00:44:58.459121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:39:34.714 passed 00:39:34.714 Test: blockdev nvme admin passthru ...passed 00:39:34.714 Test: blockdev copy ...passed 00:39:34.714 00:39:34.714 Run Summary: Type Total Ran Passed Failed Inactive 00:39:34.714 suites 1 1 n/a 0 0 00:39:34.714 tests 23 23 23 0 0 00:39:34.714 asserts 152 152 152 0 n/a 00:39:34.714 00:39:34.714 Elapsed time = 1.077 
seconds 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:34.971 rmmod nvme_tcp 00:39:34.971 rmmod nvme_fabrics 00:39:34.971 rmmod nvme_keyring 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 452608 ']' 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 452608 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 452608 ']' 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 452608 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 452608 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 452608' 00:39:34.971 killing process with pid 452608 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 452608 00:39:34.971 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 452608 00:39:35.229 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:35.229 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:35.229 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:35.229 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:39:35.229 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:39:35.229 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:35.229 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:39:35.229 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:35.229 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:35.229 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:35.229 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:35.229 00:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.769 00:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:37.769 00:39:37.769 real 0m6.358s 00:39:37.769 user 0m8.660s 00:39:37.769 sys 0m2.534s 00:39:37.769 00:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:37.769 00:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:37.769 ************************************ 00:39:37.769 END TEST nvmf_bdevio 00:39:37.769 ************************************ 00:39:37.769 00:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:37.769 00:39:37.769 real 3m53.775s 00:39:37.769 user 8m47.633s 00:39:37.769 sys 1m24.589s 00:39:37.769 00:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:39:37.769 00:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:37.769 ************************************ 00:39:37.769 END TEST nvmf_target_core_interrupt_mode 00:39:37.769 ************************************ 00:39:37.769 00:45:01 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:37.769 00:45:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:37.769 00:45:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:37.769 00:45:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:37.769 ************************************ 00:39:37.769 START TEST nvmf_interrupt 00:39:37.769 ************************************ 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:37.769 * Looking for test storage... 
00:39:37.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:37.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.769 --rc genhtml_branch_coverage=1 00:39:37.769 --rc genhtml_function_coverage=1 00:39:37.769 --rc genhtml_legend=1 00:39:37.769 --rc geninfo_all_blocks=1 00:39:37.769 --rc geninfo_unexecuted_blocks=1 00:39:37.769 00:39:37.769 ' 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:37.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.769 --rc genhtml_branch_coverage=1 00:39:37.769 --rc 
genhtml_function_coverage=1 00:39:37.769 --rc genhtml_legend=1 00:39:37.769 --rc geninfo_all_blocks=1 00:39:37.769 --rc geninfo_unexecuted_blocks=1 00:39:37.769 00:39:37.769 ' 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:37.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.769 --rc genhtml_branch_coverage=1 00:39:37.769 --rc genhtml_function_coverage=1 00:39:37.769 --rc genhtml_legend=1 00:39:37.769 --rc geninfo_all_blocks=1 00:39:37.769 --rc geninfo_unexecuted_blocks=1 00:39:37.769 00:39:37.769 ' 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:37.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.769 --rc genhtml_branch_coverage=1 00:39:37.769 --rc genhtml_function_coverage=1 00:39:37.769 --rc genhtml_legend=1 00:39:37.769 --rc geninfo_all_blocks=1 00:39:37.769 --rc geninfo_unexecuted_blocks=1 00:39:37.769 00:39:37.769 ' 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.769 
00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.769 
00:45:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:37.769 00:45:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:37.769 00:45:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.770 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:37.770 
00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:37.770 00:45:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:39:37.770 00:45:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:39.677 00:45:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:39.677 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:39.677 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:39.678 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:39.678 00:45:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:39.678 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:39.678 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:39.678 00:45:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:39.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:39.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:39:39.678 00:39:39.678 --- 10.0.0.2 ping statistics --- 00:39:39.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:39.678 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:39.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:39.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:39:39.678 00:39:39.678 --- 10.0.0.1 ping statistics --- 00:39:39.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:39.678 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:39.678 00:45:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=454892 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 454892 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 454892 ']' 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:39.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:39.678 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:39.937 [2024-11-18 00:45:03.519798] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:39.937 [2024-11-18 00:45:03.520870] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:39:39.937 [2024-11-18 00:45:03.520917] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:39.937 [2024-11-18 00:45:03.590495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:39.937 [2024-11-18 00:45:03.634188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:39.937 [2024-11-18 00:45:03.634242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:39.937 [2024-11-18 00:45:03.634267] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:39.937 [2024-11-18 00:45:03.634277] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:39.937 [2024-11-18 00:45:03.634287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:39.937 [2024-11-18 00:45:03.635549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:39.937 [2024-11-18 00:45:03.635554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:39.937 [2024-11-18 00:45:03.716689] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:39.937 [2024-11-18 00:45:03.716704] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:39.937 [2024-11-18 00:45:03.716950] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:39:39.937 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:39.937 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:39:39.937 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:39.937 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:39.937 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:39:40.197 5000+0 records in 00:39:40.197 5000+0 records out 00:39:40.197 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0140872 s, 727 MB/s 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:40.197 AIO0 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.197 00:45:03 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:40.197 [2024-11-18 00:45:03.840195] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:40.197 [2024-11-18 00:45:03.864451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 454892 0 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 454892 0 idle 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=454892 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 454892 -w 256 00:39:40.197 00:45:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:40.455 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 454892 root 20 0 128.2g 46464 33792 S 0.0 0.1 0:00.24 reactor_0' 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 454892 root 20 0 128.2g 46464 33792 S 0.0 0.1 0:00.24 reactor_0 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 454892 1 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 454892 1 idle 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=454892 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 454892 -w 256 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 454896 root 20 0 128.2g 46464 33792 S 0.0 0.1 0:00.00 reactor_1' 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 454896 root 20 0 128.2g 46464 33792 S 0.0 0.1 0:00.00 
reactor_1 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=455056 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 454892 0 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 454892 0 busy 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=454892 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 454892 -w 256 00:39:40.456 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:40.714 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 454892 root 20 0 128.2g 46848 33792 S 6.7 0.1 0:00.25 reactor_0' 00:39:40.714 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 454892 root 20 0 128.2g 46848 33792 S 6.7 0.1 0:00.25 reactor_0 00:39:40.714 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:40.714 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:40.714 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:39:40.714 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:39:40.714 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:40.714 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:40.714 00:45:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:39:41.648 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:39:41.648 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:41.648 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 
454892 -w 256 00:39:41.648 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 454892 root 20 0 128.2g 47232 33792 R 99.9 0.1 0:02.48 reactor_0' 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 454892 root 20 0 128.2g 47232 33792 R 99.9 0.1 0:02.48 reactor_0 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 454892 1 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 454892 1 busy 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=454892 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:41.907 00:45:05 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 454892 -w 256 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 454896 root 20 0 128.2g 47232 33792 R 86.7 0.1 0:01.24 reactor_1' 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 454896 root 20 0 128.2g 47232 33792 R 86.7 0.1 0:01.24 reactor_1 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:41.907 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:42.165 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=86.7 00:39:42.165 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=86 00:39:42.165 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:42.165 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:42.165 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:42.165 00:45:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:42.165 00:45:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 455056 00:39:52.135 Initializing NVMe Controllers 00:39:52.135 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:52.135 Controller IO queue size 256, less than 
required. 00:39:52.135 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:52.135 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:52.135 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:52.135 Initialization complete. Launching workers. 00:39:52.135 ======================================================== 00:39:52.135 Latency(us) 00:39:52.135 Device Information : IOPS MiB/s Average min max 00:39:52.135 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 12967.51 50.65 19756.41 4690.93 24384.35 00:39:52.135 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13764.81 53.77 18610.20 4706.20 22925.07 00:39:52.135 ======================================================== 00:39:52.135 Total : 26732.32 104.42 19166.21 4690.93 24384.35 00:39:52.135 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 454892 0 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 454892 0 idle 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=454892 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@20 -- # hash top 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 454892 -w 256 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 454892 root 20 0 128.2g 47232 33792 S 0.0 0.1 0:19.76 reactor_0' 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 454892 root 20 0 128.2g 47232 33792 S 0.0 0.1 0:19.76 reactor_0 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 454892 1 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 454892 1 idle 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=454892 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:52.135 00:45:14 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 454892 -w 256 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 454896 root 20 0 128.2g 47232 33792 S 0.0 0.1 0:09.55 reactor_1' 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 454896 root 20 0 128.2g 47232 33792 S 0.0 0.1 0:09.55 reactor_1 00:39:52.135 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:52.136 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:52.136 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:52.136 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:52.136 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:52.136 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:52.136 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:52.136 00:45:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 
00:39:52.136 00:45:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:52.136 00:45:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:39:52.136 00:45:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:39:52.136 00:45:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:52.136 00:45:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:52.136 00:45:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 454892 0 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 454892 0 idle 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=454892 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=idle 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 454892 -w 256 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 454892 root 20 0 128.2g 59520 33792 S 0.0 0.1 0:19.86 reactor_0' 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 454892 root 20 0 128.2g 59520 33792 S 0.0 0.1 0:19.86 reactor_0 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:53.520 00:45:17 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 454892 1 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 454892 1 idle 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=454892 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 454892 -w 256 00:39:53.520 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 454896 root 20 0 128.2g 59520 33792 S 0.0 0.1 0:09.58 reactor_1' 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 454896 root 20 0 128.2g 59520 33792 S 0.0 0.1 0:09.58 reactor_1 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # 
cpu_rate=0.0 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:53.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:39:53.779 
00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:53.779 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:53.779 rmmod nvme_tcp 00:39:53.779 rmmod nvme_fabrics 00:39:53.779 rmmod nvme_keyring 00:39:54.037 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:54.037 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:39:54.037 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:39:54.037 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 454892 ']' 00:39:54.037 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 454892 00:39:54.037 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 454892 ']' 00:39:54.037 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 454892 00:39:54.037 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:39:54.037 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:54.037 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 454892 00:39:54.037 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:54.037 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:54.037 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 454892' 00:39:54.037 killing process with pid 454892 00:39:54.037 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 454892 00:39:54.037 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 454892 00:39:54.297 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:54.297 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:54.297 
00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:54.297 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:39:54.297 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:39:54.297 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:54.297 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:39:54.297 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:54.297 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:54.297 00:45:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:54.297 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:54.297 00:45:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:56.204 00:45:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:56.204 00:39:56.204 real 0m18.833s 00:39:56.204 user 0m36.641s 00:39:56.204 sys 0m6.858s 00:39:56.204 00:45:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:56.204 00:45:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:56.204 ************************************ 00:39:56.204 END TEST nvmf_interrupt 00:39:56.204 ************************************ 00:39:56.204 00:39:56.204 real 33m9.154s 00:39:56.204 user 87m42.219s 00:39:56.204 sys 8m14.312s 00:39:56.204 00:45:19 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:56.204 00:45:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:56.204 ************************************ 00:39:56.204 END TEST nvmf_tcp 00:39:56.204 ************************************ 00:39:56.204 00:45:19 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:39:56.204 00:45:19 -- spdk/autotest.sh@286 -- # 
run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:56.204 00:45:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:56.204 00:45:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:56.204 00:45:19 -- common/autotest_common.sh@10 -- # set +x 00:39:56.204 ************************************ 00:39:56.204 START TEST spdkcli_nvmf_tcp 00:39:56.204 ************************************ 00:39:56.204 00:45:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:56.463 * Looking for test storage... 00:39:56.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:39:56.463 00:45:20 
spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:56.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.463 --rc genhtml_branch_coverage=1 00:39:56.463 --rc genhtml_function_coverage=1 00:39:56.463 --rc genhtml_legend=1 00:39:56.463 --rc geninfo_all_blocks=1 00:39:56.463 --rc 
geninfo_unexecuted_blocks=1 00:39:56.463 00:39:56.463 ' 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:56.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.463 --rc genhtml_branch_coverage=1 00:39:56.463 --rc genhtml_function_coverage=1 00:39:56.463 --rc genhtml_legend=1 00:39:56.463 --rc geninfo_all_blocks=1 00:39:56.463 --rc geninfo_unexecuted_blocks=1 00:39:56.463 00:39:56.463 ' 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:56.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.463 --rc genhtml_branch_coverage=1 00:39:56.463 --rc genhtml_function_coverage=1 00:39:56.463 --rc genhtml_legend=1 00:39:56.463 --rc geninfo_all_blocks=1 00:39:56.463 --rc geninfo_unexecuted_blocks=1 00:39:56.463 00:39:56.463 ' 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:56.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.463 --rc genhtml_branch_coverage=1 00:39:56.463 --rc genhtml_function_coverage=1 00:39:56.463 --rc genhtml_legend=1 00:39:56.463 --rc geninfo_all_blocks=1 00:39:56.463 --rc geninfo_unexecuted_blocks=1 00:39:56.463 00:39:56.463 ' 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.463 00:45:20 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:56.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=457570 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 457570 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 
457570 ']' 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:56.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:56.464 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:56.464 [2024-11-18 00:45:20.220783] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:39:56.464 [2024-11-18 00:45:20.220885] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457570 ] 00:39:56.722 [2024-11-18 00:45:20.291147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:56.722 [2024-11-18 00:45:20.339790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:56.722 [2024-11-18 00:45:20.339795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:56.722 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:56.722 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:39:56.722 00:45:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:39:56.722 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:56.722 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:56.722 00:45:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:39:56.722 00:45:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ 
tcp == \r\d\m\a ]] 00:39:56.722 00:45:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:39:56.722 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:56.722 00:45:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:56.722 00:45:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:39:56.722 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:39:56.722 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:39:56.722 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:39:56.722 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:39:56.722 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:39:56.722 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:39:56.722 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:56.722 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:39:56.722 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:39:56.722 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:56.722 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:56.722 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:39:56.722 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:56.722 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:56.722 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:39:56.722 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:56.722 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:56.722 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:56.722 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:56.722 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:39:56.722 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:39:56.722 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:56.722 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:39:56.722 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:56.722 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:39:56.722 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:39:56.722 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:39:56.722 ' 00:40:00.000 [2024-11-18 00:45:23.168504] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:00.932 [2024-11-18 00:45:24.432778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:40:03.459 [2024-11-18 00:45:26.775851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:05.357 [2024-11-18 00:45:28.838254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:06.741 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:06.741 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:06.741 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:06.741 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:06.741 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:06.741 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:06.741 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:06.741 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:06.741 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:06.741 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:06.741 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:06.741 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:06.741 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:06.741 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:06.741 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:06.741 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:06.741 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:06.741 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:06.741 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:06.741 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:06.741 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:06.741 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:06.741 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:06.741 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:06.741 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:06.741 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:06.741 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:06.742 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:06.742 00:45:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:40:06.742 00:45:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:06.742 00:45:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:06.742 00:45:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:06.742 00:45:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:06.742 00:45:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:06.742 00:45:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:06.742 00:45:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:07.316 00:45:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:07.316 00:45:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:07.316 00:45:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:07.316 00:45:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:07.316 00:45:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:07.316 00:45:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:07.316 00:45:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:07.316 00:45:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:07.316 00:45:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:07.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:40:07.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:07.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:07.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:07.316 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:07.316 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:07.316 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:07.316 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:07.316 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:07.316 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:07.316 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:07.316 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:07.316 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:07.316 ' 00:40:12.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:12.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:12.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:12.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:12.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:12.580 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:12.580 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:12.580 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:12.580 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:12.580 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:12.580 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:12.580 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:12.580 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:12.580 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:12.580 00:45:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:12.580 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:12.580 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 457570 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 457570 ']' 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 457570 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 457570 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 457570' 00:40:12.839 killing process with pid 457570 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 457570 00:40:12.839 00:45:36 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 457570 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 457570 ']' 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 457570 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 457570 ']' 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 457570 00:40:12.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (457570) - No such process 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 457570 is not found' 00:40:12.839 Process with pid 457570 is not found 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:12.839 00:40:12.839 real 0m16.655s 00:40:12.839 user 0m35.449s 00:40:12.839 sys 0m0.862s 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:12.839 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:12.839 ************************************ 00:40:12.839 END TEST spdkcli_nvmf_tcp 00:40:12.839 ************************************ 00:40:13.099 00:45:36 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:13.099 00:45:36 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 
']' 00:40:13.099 00:45:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:13.099 00:45:36 -- common/autotest_common.sh@10 -- # set +x 00:40:13.099 ************************************ 00:40:13.099 START TEST nvmf_identify_passthru 00:40:13.099 ************************************ 00:40:13.099 00:45:36 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:13.099 * Looking for test storage... 00:40:13.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:13.099 00:45:36 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:13.099 00:45:36 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:40:13.099 00:45:36 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:13.099 00:45:36 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:13.099 00:45:36 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:13.099 00:45:36 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:13.099 00:45:36 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:13.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:13.099 --rc genhtml_branch_coverage=1 00:40:13.099 --rc genhtml_function_coverage=1 00:40:13.099 --rc genhtml_legend=1 00:40:13.099 
--rc geninfo_all_blocks=1 00:40:13.099 --rc geninfo_unexecuted_blocks=1 00:40:13.099 00:40:13.099 ' 00:40:13.099 00:45:36 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:13.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:13.099 --rc genhtml_branch_coverage=1 00:40:13.099 --rc genhtml_function_coverage=1 00:40:13.099 --rc genhtml_legend=1 00:40:13.099 --rc geninfo_all_blocks=1 00:40:13.099 --rc geninfo_unexecuted_blocks=1 00:40:13.099 00:40:13.099 ' 00:40:13.099 00:45:36 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:13.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:13.099 --rc genhtml_branch_coverage=1 00:40:13.099 --rc genhtml_function_coverage=1 00:40:13.099 --rc genhtml_legend=1 00:40:13.099 --rc geninfo_all_blocks=1 00:40:13.099 --rc geninfo_unexecuted_blocks=1 00:40:13.099 00:40:13.099 ' 00:40:13.099 00:45:36 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:13.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:13.099 --rc genhtml_branch_coverage=1 00:40:13.099 --rc genhtml_function_coverage=1 00:40:13.099 --rc genhtml_legend=1 00:40:13.099 --rc geninfo_all_blocks=1 00:40:13.099 --rc geninfo_unexecuted_blocks=1 00:40:13.099 00:40:13.099 ' 00:40:13.099 00:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:13.099 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:13.099 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:13.099 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:13.099 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:13.099 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:13.100 00:45:36 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:13.100 00:45:36 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:13.100 00:45:36 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:13.100 00:45:36 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:13.100 00:45:36 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.100 00:45:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.100 00:45:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.100 00:45:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:13.100 00:45:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:13.100 00:45:36 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:13.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:13.100 00:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:13.100 00:45:36 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:13.100 00:45:36 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:13.100 00:45:36 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:13.100 00:45:36 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:13.100 00:45:36 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.100 00:45:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.100 00:45:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.100 00:45:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:13.100 00:45:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.100 00:45:36 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:13.100 00:45:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:13.100 00:45:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:13.100 00:45:36 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:13.100 00:45:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:15.636 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:15.637 
00:45:38 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:15.637 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:15.637 Found 0000:0a:00.1 
(0x8086 - 0x159b) 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:15.637 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:15.638 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:15.638 00:45:38 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:15.638 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:15.638 
00:45:38 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:15.638 00:45:38 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:15.638 00:45:39 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:15.638 00:45:39 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:15.639 00:45:39 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:15.639 00:45:39 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:15.639 00:45:39 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:15.639 00:45:39 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:15.639 00:45:39 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:15.639 00:45:39 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:15.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:15.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:40:15.639 00:40:15.639 --- 10.0.0.2 ping statistics --- 00:40:15.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:15.639 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:40:15.639 00:45:39 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:15.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:15.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:40:15.639 00:40:15.639 --- 10.0.0.1 ping statistics --- 00:40:15.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:15.639 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:40:15.639 00:45:39 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:15.639 00:45:39 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:40:15.639 00:45:39 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:15.639 00:45:39 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:15.639 00:45:39 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:15.640 00:45:39 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:15.640 00:45:39 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:15.640 00:45:39 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:15.640 00:45:39 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:15.640 00:45:39 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:15.640 00:45:39 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:15.640 00:45:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:15.640 00:45:39 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:15.640 
00:45:39 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:40:15.640 00:45:39 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:40:15.640 00:45:39 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:40:15.640 00:45:39 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:40:15.640 00:45:39 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:40:15.640 00:45:39 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:40:15.640 00:45:39 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:15.640 00:45:39 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:15.640 00:45:39 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:40:15.640 00:45:39 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:40:15.640 00:45:39 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:40:15.640 00:45:39 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:40:15.640 00:45:39 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:40:15.640 00:45:39 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:40:15.641 00:45:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:15.641 00:45:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:15.641 00:45:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:19.831 00:45:43 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:40:19.831 00:45:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:19.831 00:45:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:19.831 00:45:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:24.014 00:45:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:40:24.014 00:45:47 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:24.014 00:45:47 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:24.014 00:45:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:24.014 00:45:47 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:24.014 00:45:47 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:24.014 00:45:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:24.014 00:45:47 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=462195 00:40:24.014 00:45:47 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:24.014 00:45:47 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:24.014 00:45:47 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 462195 00:40:24.014 00:45:47 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 462195 ']' 00:40:24.014 00:45:47 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:40:24.014 00:45:47 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:24.014 00:45:47 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:24.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:24.014 00:45:47 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:24.014 00:45:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:24.014 [2024-11-18 00:45:47.710773] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:40:24.014 [2024-11-18 00:45:47.710857] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:24.014 [2024-11-18 00:45:47.780331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:24.014 [2024-11-18 00:45:47.824376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:24.014 [2024-11-18 00:45:47.824427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:24.014 [2024-11-18 00:45:47.824454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:24.014 [2024-11-18 00:45:47.824465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:24.014 [2024-11-18 00:45:47.824474] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:24.014 [2024-11-18 00:45:47.825857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:24.014 [2024-11-18 00:45:47.825919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:24.014 [2024-11-18 00:45:47.826028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:24.014 [2024-11-18 00:45:47.826036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:24.272 00:45:47 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:24.272 00:45:47 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:40:24.272 00:45:47 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:24.272 00:45:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.272 00:45:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:24.272 INFO: Log level set to 20 00:40:24.272 INFO: Requests: 00:40:24.272 { 00:40:24.272 "jsonrpc": "2.0", 00:40:24.272 "method": "nvmf_set_config", 00:40:24.272 "id": 1, 00:40:24.272 "params": { 00:40:24.272 "admin_cmd_passthru": { 00:40:24.272 "identify_ctrlr": true 00:40:24.272 } 00:40:24.272 } 00:40:24.272 } 00:40:24.272 00:40:24.272 INFO: response: 00:40:24.272 { 00:40:24.272 "jsonrpc": "2.0", 00:40:24.272 "id": 1, 00:40:24.272 "result": true 00:40:24.272 } 00:40:24.272 00:40:24.272 00:45:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.272 00:45:47 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:24.272 00:45:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.272 00:45:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:24.272 INFO: Setting log level to 20 00:40:24.272 INFO: Setting log level to 20 00:40:24.272 INFO: Log level set to 20 00:40:24.272 INFO: Log level set to 20 00:40:24.272 
INFO: Requests: 00:40:24.272 { 00:40:24.272 "jsonrpc": "2.0", 00:40:24.272 "method": "framework_start_init", 00:40:24.272 "id": 1 00:40:24.272 } 00:40:24.272 00:40:24.272 INFO: Requests: 00:40:24.272 { 00:40:24.272 "jsonrpc": "2.0", 00:40:24.272 "method": "framework_start_init", 00:40:24.272 "id": 1 00:40:24.272 } 00:40:24.272 00:40:24.272 [2024-11-18 00:45:48.037125] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:24.272 INFO: response: 00:40:24.272 { 00:40:24.272 "jsonrpc": "2.0", 00:40:24.272 "id": 1, 00:40:24.272 "result": true 00:40:24.272 } 00:40:24.272 00:40:24.272 INFO: response: 00:40:24.272 { 00:40:24.272 "jsonrpc": "2.0", 00:40:24.272 "id": 1, 00:40:24.272 "result": true 00:40:24.272 } 00:40:24.272 00:40:24.272 00:45:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.272 00:45:48 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:24.272 00:45:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.272 00:45:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:24.272 INFO: Setting log level to 40 00:40:24.272 INFO: Setting log level to 40 00:40:24.272 INFO: Setting log level to 40 00:40:24.272 [2024-11-18 00:45:48.047338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:24.272 00:45:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.272 00:45:48 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:24.272 00:45:48 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:24.272 00:45:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:24.272 00:45:48 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:40:24.272 00:45:48 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.272 00:45:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:27.550 Nvme0n1 00:40:27.550 00:45:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.550 00:45:50 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:27.551 00:45:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.551 00:45:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:27.551 00:45:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.551 00:45:50 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:27.551 00:45:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.551 00:45:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:27.551 00:45:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.551 00:45:50 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:27.551 00:45:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.551 00:45:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:27.551 [2024-11-18 00:45:50.950570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:27.551 00:45:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.551 00:45:50 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:27.551 00:45:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.551 00:45:50 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:27.551 [ 00:40:27.551 { 00:40:27.551 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:27.551 "subtype": "Discovery", 00:40:27.551 "listen_addresses": [], 00:40:27.551 "allow_any_host": true, 00:40:27.551 "hosts": [] 00:40:27.551 }, 00:40:27.551 { 00:40:27.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:27.551 "subtype": "NVMe", 00:40:27.551 "listen_addresses": [ 00:40:27.551 { 00:40:27.551 "trtype": "TCP", 00:40:27.551 "adrfam": "IPv4", 00:40:27.551 "traddr": "10.0.0.2", 00:40:27.551 "trsvcid": "4420" 00:40:27.551 } 00:40:27.551 ], 00:40:27.551 "allow_any_host": true, 00:40:27.551 "hosts": [], 00:40:27.551 "serial_number": "SPDK00000000000001", 00:40:27.551 "model_number": "SPDK bdev Controller", 00:40:27.551 "max_namespaces": 1, 00:40:27.551 "min_cntlid": 1, 00:40:27.551 "max_cntlid": 65519, 00:40:27.551 "namespaces": [ 00:40:27.551 { 00:40:27.551 "nsid": 1, 00:40:27.551 "bdev_name": "Nvme0n1", 00:40:27.551 "name": "Nvme0n1", 00:40:27.551 "nguid": "C4E67B0A71054EA1B374AB5194270B92", 00:40:27.551 "uuid": "c4e67b0a-7105-4ea1-b374-ab5194270b92" 00:40:27.551 } 00:40:27.551 ] 00:40:27.551 } 00:40:27.551 ] 00:40:27.551 00:45:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.551 00:45:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:27.551 00:45:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:27.551 00:45:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:27.551 00:45:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:40:27.551 00:45:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:27.551 00:45:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:27.551 00:45:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:27.551 00:45:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:40:27.551 00:45:51 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:40:27.551 00:45:51 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:40:27.551 00:45:51 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:27.551 00:45:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.551 00:45:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:27.551 00:45:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.551 00:45:51 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:27.551 00:45:51 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:27.551 00:45:51 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:27.551 00:45:51 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:40:27.551 00:45:51 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:27.551 00:45:51 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:40:27.551 00:45:51 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:27.551 00:45:51 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:27.551 rmmod nvme_tcp 00:40:27.551 rmmod nvme_fabrics 00:40:27.551 rmmod nvme_keyring 00:40:27.551 00:45:51 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:27.551 00:45:51 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:40:27.551 00:45:51 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:40:27.551 00:45:51 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 462195 ']' 00:40:27.551 00:45:51 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 462195 00:40:27.551 00:45:51 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 462195 ']' 00:40:27.551 00:45:51 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 462195 00:40:27.551 00:45:51 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:40:27.551 00:45:51 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:27.551 00:45:51 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 462195 00:40:27.551 00:45:51 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:27.551 00:45:51 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:27.551 00:45:51 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 462195' 00:40:27.551 killing process with pid 462195 00:40:27.551 00:45:51 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 462195 00:40:27.551 00:45:51 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 462195 00:40:29.454 00:45:52 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:29.454 00:45:52 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:29.454 00:45:52 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:29.454 00:45:52 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:40:29.454 00:45:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:40:29.454 00:45:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:40:29.454 00:45:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:40:29.454 00:45:52 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:29.454 00:45:52 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:29.454 00:45:52 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:29.454 00:45:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:29.454 00:45:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:31.355 00:45:54 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:31.355 00:40:31.355 real 0m18.181s 00:40:31.355 user 0m26.677s 00:40:31.355 sys 0m2.415s 00:40:31.355 00:45:54 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:31.355 00:45:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:31.355 ************************************ 00:40:31.355 END TEST nvmf_identify_passthru 00:40:31.355 ************************************ 00:40:31.355 00:45:54 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:31.355 00:45:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:31.355 00:45:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:31.355 00:45:54 -- common/autotest_common.sh@10 -- # set +x 00:40:31.355 ************************************ 00:40:31.355 START TEST nvmf_dif 00:40:31.355 ************************************ 00:40:31.355 00:45:54 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:31.355 * Looking for test storage... 
00:40:31.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:31.355 00:45:54 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:31.355 00:45:54 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:40:31.355 00:45:54 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:31.355 00:45:55 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:40:31.355 00:45:55 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:31.355 00:45:55 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:31.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:31.355 --rc genhtml_branch_coverage=1 00:40:31.355 --rc genhtml_function_coverage=1 00:40:31.355 --rc genhtml_legend=1 00:40:31.355 --rc geninfo_all_blocks=1 00:40:31.355 --rc geninfo_unexecuted_blocks=1 00:40:31.355 00:40:31.355 ' 00:40:31.355 00:45:55 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:31.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:31.355 --rc genhtml_branch_coverage=1 00:40:31.355 --rc genhtml_function_coverage=1 00:40:31.355 --rc genhtml_legend=1 00:40:31.355 --rc geninfo_all_blocks=1 00:40:31.355 --rc geninfo_unexecuted_blocks=1 00:40:31.355 00:40:31.355 ' 00:40:31.355 00:45:55 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:40:31.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:31.355 --rc genhtml_branch_coverage=1 00:40:31.355 --rc genhtml_function_coverage=1 00:40:31.355 --rc genhtml_legend=1 00:40:31.355 --rc geninfo_all_blocks=1 00:40:31.355 --rc geninfo_unexecuted_blocks=1 00:40:31.355 00:40:31.355 ' 00:40:31.355 00:45:55 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:31.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:31.355 --rc genhtml_branch_coverage=1 00:40:31.355 --rc genhtml_function_coverage=1 00:40:31.355 --rc genhtml_legend=1 00:40:31.355 --rc geninfo_all_blocks=1 00:40:31.355 --rc geninfo_unexecuted_blocks=1 00:40:31.355 00:40:31.355 ' 00:40:31.355 00:45:55 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:31.355 00:45:55 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:31.355 00:45:55 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:31.355 00:45:55 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.355 00:45:55 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.355 00:45:55 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.355 00:45:55 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:40:31.355 00:45:55 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:40:31.355 00:45:55 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:31.356 00:45:55 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:31.356 00:45:55 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:31.356 00:45:55 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:31.356 00:45:55 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:31.356 00:45:55 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:31.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:31.356 00:45:55 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:31.356 00:45:55 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:31.356 00:45:55 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:31.356 00:45:55 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:31.356 00:45:55 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
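The `cmp_versions` trace earlier in this chunk (`lt 1.15 2`, from scripts/common.sh) splits each version string on dots and compares the fields numerically, left to right. A portable two-field re-sketch of that logic (the function name and the two-field limit are simplifications, not the original helper):

```shell
# Simplified sketch of the scripts/common.sh version comparison traced above:
# split both versions on "." and compare field by field, numerically.
version_lt() {
  v1=$1 v2=$2
  old_ifs=$IFS; IFS=.
  set -- $v1; a1=${1:-0}; a2=${2:-0}
  set -- $v2; b1=${1:-0}; b2=${2:-0}
  IFS=$old_ifs
  if [ "$a1" -ne "$b1" ]; then [ "$a1" -lt "$b1" ]; return $?; fi
  [ "$a2" -lt "$b2" ]
}
version_lt 1.15 2 && echo "1.15 < 2"
```

This is why the lcov check above takes the "lt" branch: 1.15 compares below 2 on the first field alone.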
00:40:31.356 00:45:55 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:31.356 00:45:55 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:31.356 00:45:55 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:31.356 00:45:55 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:31.356 00:45:55 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:31.356 00:45:55 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:31.356 00:45:55 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:31.356 00:45:55 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:31.356 00:45:55 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:31.356 00:45:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:31.356 00:45:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:31.356 00:45:55 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:31.356 00:45:55 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:31.356 00:45:55 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:40:31.356 00:45:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:40:33.887 00:45:57 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:33.887 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:33.887 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:33.887 00:45:57 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:33.888 00:45:57 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:33.888 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:33.888 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:33.888 
00:45:57 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:33.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
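Condensed from the nvmf_tcp_init trace above, the namespace wiring amounts to four commands: create the namespace, move the target-side port into it, and address both ends of the link. They are captured here as data rather than executed, since running them requires root and the cvl_0_* devices:

```shell
# The essential namespace wiring from nvmf_tcp_init, as traced in the log.
# Initiator side keeps 10.0.0.1 on cvl_0_1; the target namespace gets
# 10.0.0.2 on cvl_0_0. Shown as data only (needs root + these devices).
plan='ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0'
printf '%s\n' "$plan" | grep -c '^ip'
```

The two pings that follow in the log verify exactly this topology: the host reaches 10.0.0.2 across the link, and the namespace reaches 10.0.0.1 back.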
00:40:33.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:40:33.888 00:40:33.888 --- 10.0.0.2 ping statistics --- 00:40:33.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:33.888 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:33.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:33.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:40:33.888 00:40:33.888 --- 10.0.0.1 ping statistics --- 00:40:33.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:33.888 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:40:33.888 00:45:57 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:34.822 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:34.822 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:40:34.822 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:34.822 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:34.822 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:40:34.822 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:34.822 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:34.822 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:34.822 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:34.822 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:34.822 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:34.822 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:34.822 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:40:34.822 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:34.822 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:34.822 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:34.822 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:35.081 00:45:58 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:35.081 00:45:58 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:35.081 00:45:58 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:35.081 00:45:58 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:35.081 00:45:58 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:35.081 00:45:58 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:35.081 00:45:58 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:35.081 00:45:58 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:35.081 00:45:58 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:35.081 00:45:58 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:35.081 00:45:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:35.081 00:45:58 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=465345 00:40:35.081 00:45:58 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:35.081 00:45:58 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 465345 00:40:35.081 00:45:58 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 465345 ']' 00:40:35.081 00:45:58 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:35.081 00:45:58 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:35.081 00:45:58 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
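Editor's aside: the setup.sh pass above reports each PCI function as already bound to vfio-pci, one status line per device. Those lines have a fixed shape, so they can be parsed with standard tools; the snippet below is illustrative only (it is not part of setup.sh, and the sample line is copied from the log above):

```shell
# Illustrative parser for one setup.sh status line (not SPDK code).
line='0000:80:04.0 (8086 0e20): Already using the vfio-pci driver'
bdf=$(printf '%s\n' "$line" | awk '{print $1}')      # PCI address
drv=$(printf '%s\n' "$line" | awk '{print $(NF-1)}') # bound driver name
echo "$bdf -> $drv"   # prints "0000:80:04.0 -> vfio-pci"
```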
00:40:35.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:35.081 00:45:58 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:35.081 00:45:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:35.081 [2024-11-18 00:45:58.782077] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:40:35.081 [2024-11-18 00:45:58.782160] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:35.081 [2024-11-18 00:45:58.858096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:35.339 [2024-11-18 00:45:58.905955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:35.339 [2024-11-18 00:45:58.906001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:35.339 [2024-11-18 00:45:58.906017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:35.339 [2024-11-18 00:45:58.906029] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:35.339 [2024-11-18 00:45:58.906039] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
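Editor's aside: `waitforlisten` above blocks until the target's RPC endpoint appears at `/var/tmp/spdk.sock`, retrying up to `max_retries` times. A minimal sketch of that polling pattern follows; it is an assumption-laden simplification (the real helper issues an RPC against the socket rather than merely checking that the path exists), and it uses a temporary file as a stand-in for the socket:

```shell
# Minimal sketch of a waitforlisten-style poll loop (illustrative only).
rpc_addr=$(mktemp)   # stand-in for /var/tmp/spdk.sock; exists immediately here
max_retries=100
i=0
while [ ! -e "$rpc_addr" ] && [ "$i" -lt "$max_retries" ]; do
  sleep 0.1
  i=$((i + 1))
done
if [ -e "$rpc_addr" ]; then status=ready; else status=timeout; fi
echo "$status"   # prints "ready"
rm -f "$rpc_addr"
```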
00:40:35.339 [2024-11-18 00:45:58.906640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:35.339 00:45:59 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:35.339 00:45:59 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:40:35.339 00:45:59 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:35.339 00:45:59 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:35.339 00:45:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:35.339 00:45:59 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:35.339 00:45:59 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:35.340 00:45:59 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:35.340 00:45:59 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.340 00:45:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:35.340 [2024-11-18 00:45:59.036881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:35.340 00:45:59 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.340 00:45:59 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:35.340 00:45:59 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:35.340 00:45:59 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:35.340 00:45:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:35.340 ************************************ 00:40:35.340 START TEST fio_dif_1_default 00:40:35.340 ************************************ 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:35.340 bdev_null0 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:35.340 [2024-11-18 00:45:59.093157] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:35.340 { 00:40:35.340 "params": { 00:40:35.340 "name": "Nvme$subsystem", 00:40:35.340 "trtype": "$TEST_TRANSPORT", 00:40:35.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:35.340 "adrfam": "ipv4", 00:40:35.340 "trsvcid": "$NVMF_PORT", 00:40:35.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:35.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:35.340 "hdgst": ${hdgst:-false}, 00:40:35.340 "ddgst": ${ddgst:-false} 00:40:35.340 }, 00:40:35.340 "method": "bdev_nvme_attach_controller" 00:40:35.340 } 00:40:35.340 EOF 00:40:35.340 )") 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:35.340 00:45:59 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
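Editor's aside: the harness above decides what to put in `LD_PRELOAD` by running `ldd` on the fio plugin and grepping for `libasan` / `libclang_rt.asan`; in this build both lookups come back empty, so no sanitizer library is preloaded. The extraction step can be sketched against a canned `ldd` line (the sample below is invented for illustration, not taken from this build):

```shell
# Illustrative: extracting a sanitizer library path from ldd-style output.
ldd_out='	libasan.so.8 => /usr/lib64/libasan.so.8 (0x00007f0000000000)'
asan_lib=$(printf '%s\n' "$ldd_out" | grep libasan | awk '{print $3}')
echo "$asan_lib"   # prints "/usr/lib64/libasan.so.8"
```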
00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:35.340 "params": { 00:40:35.340 "name": "Nvme0", 00:40:35.340 "trtype": "tcp", 00:40:35.340 "traddr": "10.0.0.2", 00:40:35.340 "adrfam": "ipv4", 00:40:35.340 "trsvcid": "4420", 00:40:35.340 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:35.340 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:35.340 "hdgst": false, 00:40:35.340 "ddgst": false 00:40:35.340 }, 00:40:35.340 "method": "bdev_nvme_attach_controller" 00:40:35.340 }' 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:35.340 00:45:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:35.611 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:35.611 fio-3.35 
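Editor's aside: `gen_nvmf_target_json`, expanded in the trace above, builds one JSON fragment per subsystem in a bash array via here-docs, then joins the elements with `IFS=,` before piping through `jq`. A stripped-down sketch of that join pattern (minus the here-docs and the `jq .` pretty-printing step) is:

```shell
# Simplified sketch of the config-array join pattern above (illustrative only).
config=()
for subsystem in 0 1; do
  config+=("{\"name\":\"Nvme$subsystem\",\"trtype\":\"tcp\"}")
done
old_ifs=$IFS
IFS=,
joined="${config[*]}"   # "${arr[*]}" joins elements with the first char of IFS
IFS=$old_ifs
echo "$joined"
```

Run as written, this prints `{"name":"Nvme0","trtype":"tcp"},{"name":"Nvme1","trtype":"tcp"}`, which is the comma-joined form that `jq .` then reformats into the pretty JSON visible in the log.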
00:40:35.611 Starting 1 thread 00:40:47.812 00:40:47.812 filename0: (groupid=0, jobs=1): err= 0: pid=465572: Mon Nov 18 00:46:10 2024 00:40:47.812 read: IOPS=565, BW=2261KiB/s (2315kB/s)(22.2MiB/10034msec) 00:40:47.812 slat (nsec): min=4371, max=47893, avg=9390.80, stdev=2904.96 00:40:47.812 clat (usec): min=502, max=46511, avg=7046.85, stdev=14855.92 00:40:47.812 lat (usec): min=510, max=46525, avg=7056.24, stdev=14855.87 00:40:47.812 clat percentiles (usec): 00:40:47.812 | 1.00th=[ 545], 5.00th=[ 570], 10.00th=[ 586], 20.00th=[ 603], 00:40:47.812 | 30.00th=[ 611], 40.00th=[ 619], 50.00th=[ 635], 60.00th=[ 652], 00:40:47.812 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[41157], 95.00th=[41681], 00:40:47.812 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[46400], 00:40:47.812 | 99.99th=[46400] 00:40:47.812 bw ( KiB/s): min= 704, max= 4224, per=100.00%, avg=2267.20, stdev=968.73, samples=20 00:40:47.812 iops : min= 176, max= 1056, avg=566.80, stdev=242.18, samples=20 00:40:47.812 lat (usec) : 750=82.40%, 1000=1.87% 00:40:47.812 lat (msec) : 50=15.73% 00:40:47.812 cpu : usr=90.98%, sys=8.72%, ctx=15, majf=0, minf=243 00:40:47.812 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:47.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:47.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:47.812 issued rwts: total=5672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:47.812 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:47.812 00:40:47.812 Run status group 0 (all jobs): 00:40:47.812 READ: bw=2261KiB/s (2315kB/s), 2261KiB/s-2261KiB/s (2315kB/s-2315kB/s), io=22.2MiB (23.2MB), run=10034-10034msec 00:40:47.812 00:46:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:40:47.812 00:46:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:40:47.812 00:46:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:40:47.812 00:46:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:47.812 00:46:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:40:47.812 00:46:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:47.812 00:46:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.812 00:46:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:47.812 00:46:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.812 00:46:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.813 00:40:47.813 real 0m11.204s 00:40:47.813 user 0m10.308s 00:40:47.813 sys 0m1.117s 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:47.813 ************************************ 00:40:47.813 END TEST fio_dif_1_default 00:40:47.813 ************************************ 00:40:47.813 00:46:10 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:40:47.813 00:46:10 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:47.813 00:46:10 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:47.813 00:46:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:47.813 ************************************ 00:40:47.813 START TEST fio_dif_1_multi_subsystems 00:40:47.813 ************************************ 00:40:47.813 00:46:10 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:47.813 bdev_null0 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.813 00:46:10 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:47.813 [2024-11-18 00:46:10.344017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:47.813 bdev_null1 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:40:47.813 { 00:40:47.813 "params": { 00:40:47.813 "name": "Nvme$subsystem", 00:40:47.813 "trtype": "$TEST_TRANSPORT", 00:40:47.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:47.813 "adrfam": "ipv4", 00:40:47.813 "trsvcid": "$NVMF_PORT", 00:40:47.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:47.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:47.813 "hdgst": ${hdgst:-false}, 00:40:47.813 "ddgst": ${ddgst:-false} 00:40:47.813 }, 00:40:47.813 "method": "bdev_nvme_attach_controller" 00:40:47.813 } 00:40:47.813 EOF 00:40:47.813 )") 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:47.813 { 00:40:47.813 "params": { 00:40:47.813 "name": "Nvme$subsystem", 00:40:47.813 "trtype": "$TEST_TRANSPORT", 00:40:47.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:47.813 "adrfam": "ipv4", 00:40:47.813 "trsvcid": "$NVMF_PORT", 00:40:47.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:47.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:47.813 "hdgst": ${hdgst:-false}, 00:40:47.813 "ddgst": ${ddgst:-false} 00:40:47.813 }, 00:40:47.813 "method": "bdev_nvme_attach_controller" 00:40:47.813 } 00:40:47.813 EOF 00:40:47.813 )") 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:40:47.813 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:47.813 "params": { 00:40:47.813 "name": "Nvme0", 00:40:47.813 "trtype": "tcp", 00:40:47.813 "traddr": "10.0.0.2", 00:40:47.813 "adrfam": "ipv4", 00:40:47.813 "trsvcid": "4420", 00:40:47.813 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:47.813 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:47.813 "hdgst": false, 00:40:47.813 "ddgst": false 00:40:47.813 }, 00:40:47.814 "method": "bdev_nvme_attach_controller" 00:40:47.814 },{ 00:40:47.814 "params": { 00:40:47.814 "name": "Nvme1", 00:40:47.814 "trtype": "tcp", 00:40:47.814 "traddr": "10.0.0.2", 00:40:47.814 "adrfam": "ipv4", 00:40:47.814 "trsvcid": "4420", 00:40:47.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:47.814 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:47.814 "hdgst": false, 00:40:47.814 "ddgst": false 00:40:47.814 }, 00:40:47.814 "method": "bdev_nvme_attach_controller" 00:40:47.814 }' 00:40:47.814 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:47.814 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:47.814 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:47.814 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:47.814 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:47.814 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:47.814 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:47.814 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:47.814 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:47.814 00:46:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:47.814 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:47.814 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:47.814 fio-3.35 00:40:47.814 Starting 2 threads 00:40:57.782 00:40:57.782 filename0: (groupid=0, jobs=1): err= 0: pid=466968: Mon Nov 18 00:46:21 2024 00:40:57.782 read: IOPS=145, BW=582KiB/s (596kB/s)(5824KiB/10012msec) 00:40:57.782 slat (nsec): min=7821, max=27567, avg=9363.96, stdev=2341.67 00:40:57.782 clat (usec): min=499, max=44400, avg=27474.79, stdev=19194.06 00:40:57.782 lat (usec): min=507, max=44413, avg=27484.16, stdev=19193.96 00:40:57.782 clat percentiles (usec): 00:40:57.782 | 1.00th=[ 545], 5.00th=[ 578], 10.00th=[ 586], 20.00th=[ 627], 00:40:57.782 | 30.00th=[ 660], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:57.782 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:57.782 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:40:57.782 | 99.99th=[44303] 00:40:57.782 bw ( KiB/s): min= 384, max= 832, per=59.42%, avg=580.80, stdev=191.80, samples=20 00:40:57.782 iops : min= 96, max= 208, avg=145.20, stdev=47.95, samples=20 00:40:57.782 lat (usec) : 500=0.07%, 750=32.62%, 1000=1.10% 00:40:57.782 lat (msec) : 50=66.21% 00:40:57.782 cpu : usr=94.59%, sys=5.14%, ctx=16, majf=0, minf=168 00:40:57.782 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:57.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:40:57.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.782 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.782 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:57.782 filename1: (groupid=0, jobs=1): err= 0: pid=466969: Mon Nov 18 00:46:21 2024 00:40:57.782 read: IOPS=98, BW=395KiB/s (404kB/s)(3952KiB/10016msec) 00:40:57.782 slat (nsec): min=6979, max=23580, avg=9417.25, stdev=2429.02 00:40:57.782 clat (usec): min=564, max=45464, avg=40519.01, stdev=4440.82 00:40:57.782 lat (usec): min=572, max=45479, avg=40528.43, stdev=4440.70 00:40:57.782 clat percentiles (usec): 00:40:57.782 | 1.00th=[ 652], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:57.782 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:57.782 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:57.782 | 99.00th=[43254], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:40:57.782 | 99.99th=[45351] 00:40:57.782 bw ( KiB/s): min= 384, max= 416, per=40.26%, avg=393.60, stdev=15.05, samples=20 00:40:57.782 iops : min= 96, max= 104, avg=98.40, stdev= 3.76, samples=20 00:40:57.782 lat (usec) : 750=1.21% 00:40:57.782 lat (msec) : 50=98.79% 00:40:57.782 cpu : usr=95.00%, sys=4.73%, ctx=12, majf=0, minf=90 00:40:57.782 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:57.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:57.783 issued rwts: total=988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:57.783 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:57.783 00:40:57.783 Run status group 0 (all jobs): 00:40:57.783 READ: bw=976KiB/s (999kB/s), 395KiB/s-582KiB/s (404kB/s-596kB/s), io=9776KiB (10.0MB), run=10012-10016msec 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:58.042 00:46:21 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.042 00:40:58.042 real 0m11.373s 00:40:58.042 user 0m20.307s 00:40:58.042 sys 0m1.262s 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:58.042 00:46:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:58.042 ************************************ 00:40:58.042 END TEST fio_dif_1_multi_subsystems 00:40:58.042 ************************************ 00:40:58.042 00:46:21 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:40:58.042 00:46:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:58.042 00:46:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:58.042 00:46:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:58.042 ************************************ 00:40:58.042 START TEST fio_dif_rand_params 00:40:58.042 ************************************ 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:40:58.042 00:46:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:58.042 bdev_null0 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:58.042 [2024-11-18 00:46:21.765567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:58.042 { 00:40:58.042 "params": { 00:40:58.042 "name": "Nvme$subsystem", 00:40:58.042 "trtype": "$TEST_TRANSPORT", 00:40:58.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:58.042 "adrfam": "ipv4", 00:40:58.042 "trsvcid": "$NVMF_PORT", 00:40:58.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:58.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:58.042 "hdgst": ${hdgst:-false}, 00:40:58.042 "ddgst": ${ddgst:-false} 00:40:58.042 }, 
00:40:58.042 "method": "bdev_nvme_attach_controller" 00:40:58.042 } 00:40:58.042 EOF 00:40:58.042 )") 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:58.042 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:40:58.043 
00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:58.043 "params": { 00:40:58.043 "name": "Nvme0", 00:40:58.043 "trtype": "tcp", 00:40:58.043 "traddr": "10.0.0.2", 00:40:58.043 "adrfam": "ipv4", 00:40:58.043 "trsvcid": "4420", 00:40:58.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:58.043 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:58.043 "hdgst": false, 00:40:58.043 "ddgst": false 00:40:58.043 }, 00:40:58.043 "method": "bdev_nvme_attach_controller" 00:40:58.043 }' 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:58.043 00:46:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:58.300 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:58.300 ... 00:40:58.300 fio-3.35 00:40:58.300 Starting 3 threads 00:41:04.987 00:41:04.987 filename0: (groupid=0, jobs=1): err= 0: pid=468364: Mon Nov 18 00:46:27 2024 00:41:04.987 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(132MiB/5008msec) 00:41:04.987 slat (nsec): min=4512, max=63625, avg=14670.80, stdev=5508.56 00:41:04.987 clat (usec): min=7101, max=54894, avg=14233.20, stdev=9988.55 00:41:04.987 lat (usec): min=7113, max=54907, avg=14247.87, stdev=9988.36 00:41:04.987 clat percentiles (usec): 00:41:04.987 | 1.00th=[ 8455], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10683], 00:41:04.987 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11731], 60.00th=[12125], 00:41:04.987 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13566], 95.00th=[51119], 00:41:04.987 | 99.00th=[52691], 99.50th=[53740], 99.90th=[54264], 99.95th=[54789], 00:41:04.987 | 99.99th=[54789] 00:41:04.987 bw ( KiB/s): min=23552, max=32000, per=32.38%, avg=26905.60, stdev=2569.79, samples=10 00:41:04.987 iops : min= 184, max= 250, avg=210.20, stdev=20.08, samples=10 00:41:04.987 lat (msec) : 10=7.97%, 20=85.48%, 50=0.47%, 100=6.07% 00:41:04.987 cpu : usr=93.49%, sys=6.01%, ctx=13, majf=0, minf=120 00:41:04.987 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:04.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:04.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:04.987 issued rwts: total=1054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:04.987 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:04.987 filename0: (groupid=0, jobs=1): err= 0: pid=468365: Mon Nov 18 00:46:27 2024 00:41:04.987 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(134MiB/5048msec) 00:41:04.987 slat (nsec): min=5003, max=43138, avg=15685.93, 
stdev=4582.71 00:41:04.987 clat (usec): min=4841, max=51094, avg=14119.98, stdev=3861.67 00:41:04.987 lat (usec): min=4854, max=51114, avg=14135.67, stdev=3861.62 00:41:04.987 clat percentiles (usec): 00:41:04.987 | 1.00th=[ 5604], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[10290], 00:41:04.987 | 30.00th=[11994], 40.00th=[13566], 50.00th=[14353], 60.00th=[15139], 00:41:04.987 | 70.00th=[16319], 80.00th=[17433], 90.00th=[18482], 95.00th=[19268], 00:41:04.987 | 99.00th=[20841], 99.50th=[21365], 99.90th=[47973], 99.95th=[51119], 00:41:04.987 | 99.99th=[51119] 00:41:04.987 bw ( KiB/s): min=23808, max=30720, per=32.81%, avg=27264.00, stdev=2159.62, samples=10 00:41:04.987 iops : min= 186, max= 240, avg=213.00, stdev=16.87, samples=10 00:41:04.987 lat (msec) : 10=17.42%, 20=80.34%, 50=2.15%, 100=0.09% 00:41:04.987 cpu : usr=93.62%, sys=5.49%, ctx=80, majf=0, minf=86 00:41:04.987 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:04.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:04.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:04.988 issued rwts: total=1068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:04.988 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:04.988 filename0: (groupid=0, jobs=1): err= 0: pid=468366: Mon Nov 18 00:46:27 2024 00:41:04.988 read: IOPS=230, BW=28.8MiB/s (30.2MB/s)(144MiB/5006msec) 00:41:04.988 slat (nsec): min=4966, max=56626, avg=16402.83, stdev=4999.07 00:41:04.988 clat (usec): min=6517, max=57641, avg=12979.04, stdev=4774.18 00:41:04.988 lat (usec): min=6542, max=57649, avg=12995.44, stdev=4773.97 00:41:04.988 clat percentiles (usec): 00:41:04.988 | 1.00th=[ 7046], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[ 8717], 00:41:04.988 | 30.00th=[11600], 40.00th=[12911], 50.00th=[13435], 60.00th=[13829], 00:41:04.988 | 70.00th=[14353], 80.00th=[15008], 90.00th=[16188], 95.00th=[17171], 00:41:04.988 | 99.00th=[19530], 99.50th=[54264], 
99.90th=[55313], 99.95th=[57410], 00:41:04.988 | 99.99th=[57410] 00:41:04.988 bw ( KiB/s): min=23296, max=32000, per=35.49%, avg=29491.20, stdev=2798.62, samples=10 00:41:04.988 iops : min= 182, max= 250, avg=230.40, stdev=21.86, samples=10 00:41:04.988 lat (msec) : 10=26.41%, 20=72.73%, 50=0.09%, 100=0.78% 00:41:04.988 cpu : usr=93.45%, sys=6.01%, ctx=19, majf=0, minf=90 00:41:04.988 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:04.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:04.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:04.988 issued rwts: total=1155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:04.988 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:04.988 00:41:04.988 Run status group 0 (all jobs): 00:41:04.988 READ: bw=81.1MiB/s (85.1MB/s), 26.3MiB/s-28.8MiB/s (27.6MB/s-30.2MB/s), io=410MiB (430MB), run=5006-5048msec 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:04.988 00:46:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:04.988 bdev_null0 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:04.988 [2024-11-18 00:46:27.821857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:04.988 bdev_null1 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:41:04.988 bdev_null2 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@560 -- # local subsystem config 00:41:04.988 00:46:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:04.989 { 00:41:04.989 "params": { 00:41:04.989 "name": "Nvme$subsystem", 00:41:04.989 "trtype": "$TEST_TRANSPORT", 00:41:04.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:04.989 "adrfam": "ipv4", 00:41:04.989 "trsvcid": "$NVMF_PORT", 00:41:04.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:04.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:04.989 "hdgst": ${hdgst:-false}, 00:41:04.989 "ddgst": ${ddgst:-false} 00:41:04.989 }, 00:41:04.989 "method": "bdev_nvme_attach_controller" 00:41:04.989 } 00:41:04.989 EOF 00:41:04.989 )") 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:04.989 
00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:04.989 { 00:41:04.989 "params": { 00:41:04.989 "name": "Nvme$subsystem", 00:41:04.989 "trtype": "$TEST_TRANSPORT", 00:41:04.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:04.989 "adrfam": "ipv4", 00:41:04.989 "trsvcid": "$NVMF_PORT", 00:41:04.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:04.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:04.989 "hdgst": ${hdgst:-false}, 00:41:04.989 "ddgst": ${ddgst:-false} 00:41:04.989 }, 00:41:04.989 "method": "bdev_nvme_attach_controller" 00:41:04.989 } 00:41:04.989 EOF 00:41:04.989 )") 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:04.989 00:46:27 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:04.989 { 00:41:04.989 "params": { 00:41:04.989 "name": "Nvme$subsystem", 00:41:04.989 "trtype": "$TEST_TRANSPORT", 00:41:04.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:04.989 "adrfam": "ipv4", 00:41:04.989 "trsvcid": "$NVMF_PORT", 00:41:04.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:04.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:04.989 "hdgst": ${hdgst:-false}, 00:41:04.989 "ddgst": ${ddgst:-false} 00:41:04.989 }, 00:41:04.989 "method": "bdev_nvme_attach_controller" 00:41:04.989 } 00:41:04.989 EOF 00:41:04.989 )") 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:04.989 "params": { 00:41:04.989 "name": "Nvme0", 00:41:04.989 "trtype": "tcp", 00:41:04.989 "traddr": "10.0.0.2", 00:41:04.989 "adrfam": "ipv4", 00:41:04.989 "trsvcid": "4420", 00:41:04.989 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:04.989 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:04.989 "hdgst": false, 00:41:04.989 "ddgst": false 00:41:04.989 }, 00:41:04.989 "method": "bdev_nvme_attach_controller" 00:41:04.989 },{ 00:41:04.989 "params": { 00:41:04.989 "name": "Nvme1", 00:41:04.989 "trtype": "tcp", 00:41:04.989 "traddr": "10.0.0.2", 00:41:04.989 "adrfam": "ipv4", 00:41:04.989 "trsvcid": "4420", 00:41:04.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:04.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:04.989 "hdgst": false, 00:41:04.989 "ddgst": false 00:41:04.989 }, 00:41:04.989 "method": "bdev_nvme_attach_controller" 00:41:04.989 },{ 00:41:04.989 "params": { 00:41:04.989 "name": "Nvme2", 00:41:04.989 "trtype": "tcp", 00:41:04.989 "traddr": "10.0.0.2", 00:41:04.989 "adrfam": "ipv4", 00:41:04.989 "trsvcid": "4420", 00:41:04.989 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:04.989 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:04.989 "hdgst": false, 00:41:04.989 "ddgst": false 00:41:04.989 }, 00:41:04.989 "method": "bdev_nvme_attach_controller" 00:41:04.989 }' 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:04.989 00:46:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:04.989 00:46:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:04.989 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:04.989 ... 00:41:04.989 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:04.989 ... 00:41:04.989 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:04.989 ... 
00:41:04.989 fio-3.35 00:41:04.990 Starting 24 threads 00:41:17.198 00:41:17.198 filename0: (groupid=0, jobs=1): err= 0: pid=469231: Mon Nov 18 00:46:39 2024 00:41:17.198 read: IOPS=74, BW=298KiB/s (305kB/s)(3008KiB/10092msec) 00:41:17.198 slat (nsec): min=7159, max=43641, avg=10034.43, stdev=3088.80 00:41:17.198 clat (msec): min=105, max=309, avg=213.73, stdev=42.38 00:41:17.198 lat (msec): min=105, max=309, avg=213.74, stdev=42.38 00:41:17.198 clat percentiles (msec): 00:41:17.198 | 1.00th=[ 106], 5.00th=[ 127], 10.00th=[ 155], 20.00th=[ 182], 00:41:17.198 | 30.00th=[ 197], 40.00th=[ 201], 50.00th=[ 215], 60.00th=[ 236], 00:41:17.198 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 268], 00:41:17.198 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 309], 99.95th=[ 309], 00:41:17.198 | 99.99th=[ 309] 00:41:17.198 bw ( KiB/s): min= 256, max= 384, per=5.10%, avg=294.40, stdev=60.18, samples=20 00:41:17.198 iops : min= 64, max= 96, avg=73.60, stdev=15.05, samples=20 00:41:17.198 lat (msec) : 250=72.87%, 500=27.13% 00:41:17.198 cpu : usr=98.14%, sys=1.33%, ctx=58, majf=0, minf=51 00:41:17.198 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:41:17.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.198 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.198 issued rwts: total=752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.198 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.198 filename0: (groupid=0, jobs=1): err= 0: pid=469232: Mon Nov 18 00:46:39 2024 00:41:17.198 read: IOPS=52, BW=210KiB/s (215kB/s)(2112KiB/10068msec) 00:41:17.198 slat (usec): min=8, max=106, avg=50.33, stdev=31.68 00:41:17.198 clat (msec): min=167, max=446, avg=304.65, stdev=58.43 00:41:17.198 lat (msec): min=167, max=446, avg=304.70, stdev=58.44 00:41:17.198 clat percentiles (msec): 00:41:17.198 | 1.00th=[ 167], 5.00th=[ 230], 10.00th=[ 239], 20.00th=[ 259], 00:41:17.198 | 
30.00th=[ 275], 40.00th=[ 279], 50.00th=[ 292], 60.00th=[ 317], 00:41:17.198 | 70.00th=[ 334], 80.00th=[ 363], 90.00th=[ 384], 95.00th=[ 397], 00:41:17.198 | 99.00th=[ 443], 99.50th=[ 443], 99.90th=[ 447], 99.95th=[ 447], 00:41:17.198 | 99.99th=[ 447] 00:41:17.198 bw ( KiB/s): min= 128, max= 256, per=3.54%, avg=204.80, stdev=62.85, samples=20 00:41:17.198 iops : min= 32, max= 64, avg=51.20, stdev=15.71, samples=20 00:41:17.198 lat (msec) : 250=12.88%, 500=87.12% 00:41:17.198 cpu : usr=97.85%, sys=1.62%, ctx=36, majf=0, minf=26 00:41:17.198 IO depths : 1=4.4%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.1%, 32=0.0%, >=64=0.0% 00:41:17.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.198 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.198 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.198 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.198 filename0: (groupid=0, jobs=1): err= 0: pid=469233: Mon Nov 18 00:46:39 2024 00:41:17.198 read: IOPS=49, BW=197KiB/s (202kB/s)(1984KiB/10070msec) 00:41:17.198 slat (nsec): min=3844, max=69480, avg=19018.04, stdev=10128.16 00:41:17.198 clat (msec): min=166, max=517, avg=324.66, stdev=62.66 00:41:17.198 lat (msec): min=166, max=517, avg=324.68, stdev=62.65 00:41:17.198 clat percentiles (msec): 00:41:17.198 | 1.00th=[ 190], 5.00th=[ 239], 10.00th=[ 251], 20.00th=[ 271], 00:41:17.198 | 30.00th=[ 288], 40.00th=[ 300], 50.00th=[ 313], 60.00th=[ 334], 00:41:17.198 | 70.00th=[ 359], 80.00th=[ 384], 90.00th=[ 397], 95.00th=[ 443], 00:41:17.198 | 99.00th=[ 498], 99.50th=[ 502], 99.90th=[ 518], 99.95th=[ 518], 00:41:17.198 | 99.99th=[ 518] 00:41:17.198 bw ( KiB/s): min= 128, max= 256, per=3.33%, avg=192.00, stdev=64.21, samples=20 00:41:17.198 iops : min= 32, max= 64, avg=48.00, stdev=16.05, samples=20 00:41:17.198 lat (msec) : 250=7.66%, 500=91.53%, 750=0.81% 00:41:17.198 cpu : usr=98.65%, sys=0.96%, ctx=21, majf=0, minf=38 00:41:17.198 IO 
depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:41:17.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.198 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.198 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.198 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.198 filename0: (groupid=0, jobs=1): err= 0: pid=469234: Mon Nov 18 00:46:39 2024 00:41:17.198 read: IOPS=64, BW=259KiB/s (265kB/s)(2608KiB/10076msec) 00:41:17.198 slat (nsec): min=7984, max=70603, avg=17941.67, stdev=11574.61 00:41:17.198 clat (msec): min=88, max=370, avg=246.96, stdev=39.47 00:41:17.198 lat (msec): min=88, max=370, avg=246.98, stdev=39.47 00:41:17.198 clat percentiles (msec): 00:41:17.198 | 1.00th=[ 88], 5.00th=[ 197], 10.00th=[ 201], 20.00th=[ 211], 00:41:17.198 | 30.00th=[ 230], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 259], 00:41:17.198 | 70.00th=[ 264], 80.00th=[ 275], 90.00th=[ 292], 95.00th=[ 300], 00:41:17.198 | 99.00th=[ 351], 99.50th=[ 372], 99.90th=[ 372], 99.95th=[ 372], 00:41:17.198 | 99.99th=[ 372] 00:41:17.198 bw ( KiB/s): min= 128, max= 384, per=4.41%, avg=254.40, stdev=55.64, samples=20 00:41:17.198 iops : min= 32, max= 96, avg=63.60, stdev=13.91, samples=20 00:41:17.198 lat (msec) : 100=1.53%, 250=45.09%, 500=53.37% 00:41:17.198 cpu : usr=98.41%, sys=1.18%, ctx=15, majf=0, minf=30 00:41:17.198 IO depths : 1=2.6%, 2=5.7%, 4=15.3%, 8=66.4%, 16=10.0%, 32=0.0%, >=64=0.0% 00:41:17.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.198 complete : 0=0.0%, 4=91.2%, 8=3.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.198 issued rwts: total=652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.198 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.198 filename0: (groupid=0, jobs=1): err= 0: pid=469235: Mon Nov 18 00:46:39 2024 00:41:17.198 read: IOPS=60, BW=243KiB/s (248kB/s)(2432KiB/10025msec) 
00:41:17.198 slat (nsec): min=8173, max=65497, avg=22087.68, stdev=10945.41 00:41:17.198 clat (msec): min=159, max=392, avg=263.62, stdev=45.19 00:41:17.198 lat (msec): min=159, max=392, avg=263.64, stdev=45.19 00:41:17.198 clat percentiles (msec): 00:41:17.198 | 1.00th=[ 167], 5.00th=[ 201], 10.00th=[ 213], 20.00th=[ 236], 00:41:17.198 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 257], 60.00th=[ 268], 00:41:17.198 | 70.00th=[ 275], 80.00th=[ 292], 90.00th=[ 330], 95.00th=[ 359], 00:41:17.198 | 99.00th=[ 393], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 393], 00:41:17.198 | 99.99th=[ 393] 00:41:17.198 bw ( KiB/s): min= 128, max= 384, per=4.10%, avg=236.80, stdev=71.10, samples=20 00:41:17.198 iops : min= 32, max= 96, avg=59.20, stdev=17.78, samples=20 00:41:17.198 lat (msec) : 250=39.47%, 500=60.53% 00:41:17.198 cpu : usr=98.50%, sys=1.07%, ctx=31, majf=0, minf=53 00:41:17.198 IO depths : 1=2.8%, 2=9.0%, 4=25.0%, 8=53.5%, 16=9.7%, 32=0.0%, >=64=0.0% 00:41:17.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.198 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.199 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.199 filename0: (groupid=0, jobs=1): err= 0: pid=469236: Mon Nov 18 00:46:39 2024 00:41:17.199 read: IOPS=60, BW=241KiB/s (247kB/s)(2432KiB/10092msec) 00:41:17.199 slat (usec): min=6, max=104, avg=44.59, stdev=27.80 00:41:17.199 clat (msec): min=127, max=439, avg=264.54, stdev=48.82 00:41:17.199 lat (msec): min=127, max=439, avg=264.59, stdev=48.83 00:41:17.199 clat percentiles (msec): 00:41:17.199 | 1.00th=[ 128], 5.00th=[ 180], 10.00th=[ 211], 20.00th=[ 245], 00:41:17.199 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 259], 60.00th=[ 266], 00:41:17.199 | 70.00th=[ 275], 80.00th=[ 292], 90.00th=[ 330], 95.00th=[ 363], 00:41:17.199 | 99.00th=[ 384], 99.50th=[ 422], 99.90th=[ 439], 99.95th=[ 439], 00:41:17.199 
| 99.99th=[ 439] 00:41:17.199 bw ( KiB/s): min= 128, max= 304, per=4.10%, avg=236.80, stdev=47.46, samples=20 00:41:17.199 iops : min= 32, max= 76, avg=59.20, stdev=11.87, samples=20 00:41:17.199 lat (msec) : 250=33.88%, 500=66.12% 00:41:17.199 cpu : usr=98.33%, sys=1.20%, ctx=30, majf=0, minf=39 00:41:17.199 IO depths : 1=2.3%, 2=6.1%, 4=17.6%, 8=63.8%, 16=10.2%, 32=0.0%, >=64=0.0% 00:41:17.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.199 complete : 0=0.0%, 4=92.0%, 8=2.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.199 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.199 filename0: (groupid=0, jobs=1): err= 0: pid=469237: Mon Nov 18 00:46:39 2024 00:41:17.199 read: IOPS=57, BW=231KiB/s (236kB/s)(2328KiB/10093msec) 00:41:17.199 slat (usec): min=7, max=142, avg=47.69, stdev=26.27 00:41:17.199 clat (msec): min=125, max=527, avg=275.99, stdev=71.23 00:41:17.199 lat (msec): min=126, max=527, avg=276.04, stdev=71.25 00:41:17.199 clat percentiles (msec): 00:41:17.199 | 1.00th=[ 161], 5.00th=[ 171], 10.00th=[ 199], 20.00th=[ 207], 00:41:17.199 | 30.00th=[ 236], 40.00th=[ 255], 50.00th=[ 266], 60.00th=[ 279], 00:41:17.199 | 70.00th=[ 300], 80.00th=[ 355], 90.00th=[ 384], 95.00th=[ 397], 00:41:17.199 | 99.00th=[ 481], 99.50th=[ 498], 99.90th=[ 527], 99.95th=[ 527], 00:41:17.199 | 99.99th=[ 527] 00:41:17.199 bw ( KiB/s): min= 128, max= 368, per=3.92%, avg=226.40, stdev=65.31, samples=20 00:41:17.199 iops : min= 32, max= 92, avg=56.60, stdev=16.33, samples=20 00:41:17.199 lat (msec) : 250=35.40%, 500=64.26%, 750=0.34% 00:41:17.199 cpu : usr=98.19%, sys=1.35%, ctx=16, majf=0, minf=31 00:41:17.199 IO depths : 1=2.9%, 2=8.2%, 4=22.2%, 8=57.0%, 16=9.6%, 32=0.0%, >=64=0.0% 00:41:17.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.199 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:41:17.199 issued rwts: total=582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.199 filename0: (groupid=0, jobs=1): err= 0: pid=469238: Mon Nov 18 00:46:39 2024 00:41:17.199 read: IOPS=65, BW=260KiB/s (266kB/s)(2624KiB/10087msec) 00:41:17.199 slat (usec): min=3, max=112, avg=23.23, stdev=18.49 00:41:17.199 clat (msec): min=100, max=439, avg=245.78, stdev=43.67 00:41:17.199 lat (msec): min=100, max=439, avg=245.80, stdev=43.68 00:41:17.199 clat percentiles (msec): 00:41:17.199 | 1.00th=[ 153], 5.00th=[ 190], 10.00th=[ 199], 20.00th=[ 207], 00:41:17.199 | 30.00th=[ 230], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 255], 00:41:17.199 | 70.00th=[ 266], 80.00th=[ 275], 90.00th=[ 292], 95.00th=[ 300], 00:41:17.199 | 99.00th=[ 384], 99.50th=[ 401], 99.90th=[ 439], 99.95th=[ 439], 00:41:17.199 | 99.99th=[ 439] 00:41:17.199 bw ( KiB/s): min= 128, max= 368, per=4.44%, avg=256.00, stdev=53.45, samples=20 00:41:17.199 iops : min= 32, max= 92, avg=64.00, stdev=13.36, samples=20 00:41:17.199 lat (msec) : 250=53.35%, 500=46.65% 00:41:17.199 cpu : usr=98.68%, sys=0.92%, ctx=26, majf=0, minf=47 00:41:17.199 IO depths : 1=1.8%, 2=8.1%, 4=25.0%, 8=54.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:41:17.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.199 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.199 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.199 filename1: (groupid=0, jobs=1): err= 0: pid=469239: Mon Nov 18 00:46:39 2024 00:41:17.199 read: IOPS=74, BW=297KiB/s (304kB/s)(3000KiB/10105msec) 00:41:17.199 slat (nsec): min=3998, max=79419, avg=12645.43, stdev=10481.49 00:41:17.199 clat (msec): min=32, max=399, avg=214.65, stdev=54.21 00:41:17.199 lat (msec): min=32, max=399, avg=214.66, stdev=54.21 00:41:17.199 clat percentiles (msec): 00:41:17.199 | 1.00th=[ 
33], 5.00th=[ 129], 10.00th=[ 148], 20.00th=[ 188], 00:41:17.199 | 30.00th=[ 197], 40.00th=[ 203], 50.00th=[ 228], 60.00th=[ 236], 00:41:17.199 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 271], 00:41:17.199 | 99.00th=[ 363], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:41:17.199 | 99.99th=[ 401] 00:41:17.199 bw ( KiB/s): min= 176, max= 512, per=5.14%, avg=296.00, stdev=66.98, samples=20 00:41:17.199 iops : min= 44, max= 128, avg=74.00, stdev=16.75, samples=20 00:41:17.199 lat (msec) : 50=2.13%, 100=1.87%, 250=74.40%, 500=21.60% 00:41:17.199 cpu : usr=98.40%, sys=1.19%, ctx=19, majf=0, minf=74 00:41:17.199 IO depths : 1=0.7%, 2=1.6%, 4=8.8%, 8=76.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:41:17.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.199 complete : 0=0.0%, 4=89.4%, 8=5.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.199 issued rwts: total=750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.199 filename1: (groupid=0, jobs=1): err= 0: pid=469240: Mon Nov 18 00:46:39 2024 00:41:17.199 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10089msec) 00:41:17.199 slat (usec): min=3, max=106, avg=33.05, stdev=28.38 00:41:17.199 clat (msec): min=92, max=393, avg=258.37, stdev=40.29 00:41:17.199 lat (msec): min=92, max=393, avg=258.40, stdev=40.30 00:41:17.199 clat percentiles (msec): 00:41:17.199 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 213], 20.00th=[ 236], 00:41:17.199 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 257], 60.00th=[ 268], 00:41:17.199 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 300], 95.00th=[ 330], 00:41:17.199 | 99.00th=[ 359], 99.50th=[ 384], 99.90th=[ 393], 99.95th=[ 393], 00:41:17.199 | 99.99th=[ 393] 00:41:17.199 bw ( KiB/s): min= 128, max= 256, per=4.22%, avg=243.20, stdev=36.93, samples=20 00:41:17.199 iops : min= 32, max= 64, avg=60.80, stdev= 9.23, samples=20 00:41:17.199 lat (msec) : 100=0.32%, 250=36.54%, 500=63.14% 
00:41:17.199 cpu : usr=98.34%, sys=1.21%, ctx=27, majf=0, minf=38 00:41:17.199 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:41:17.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.199 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.199 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.199 filename1: (groupid=0, jobs=1): err= 0: pid=469241: Mon Nov 18 00:46:39 2024 00:41:17.199 read: IOPS=56, BW=225KiB/s (230kB/s)(2264KiB/10068msec) 00:41:17.199 slat (nsec): min=7980, max=93632, avg=43994.43, stdev=31434.73 00:41:17.199 clat (msec): min=128, max=467, avg=283.08, stdev=52.73 00:41:17.199 lat (msec): min=128, max=468, avg=283.12, stdev=52.75 00:41:17.199 clat percentiles (msec): 00:41:17.199 | 1.00th=[ 161], 5.00th=[ 226], 10.00th=[ 234], 20.00th=[ 249], 00:41:17.199 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 268], 60.00th=[ 279], 00:41:17.199 | 70.00th=[ 292], 80.00th=[ 326], 90.00th=[ 342], 95.00th=[ 384], 00:41:17.199 | 99.00th=[ 447], 99.50th=[ 456], 99.90th=[ 468], 99.95th=[ 468], 00:41:17.199 | 99.99th=[ 468] 00:41:17.199 bw ( KiB/s): min= 128, max= 336, per=3.87%, avg=224.00, stdev=64.84, samples=20 00:41:17.199 iops : min= 32, max= 84, avg=56.00, stdev=16.21, samples=20 00:41:17.199 lat (msec) : 250=24.73%, 500=75.27% 00:41:17.199 cpu : usr=97.82%, sys=1.42%, ctx=154, majf=0, minf=31 00:41:17.199 IO depths : 1=1.9%, 2=5.8%, 4=17.8%, 8=63.8%, 16=10.6%, 32=0.0%, >=64=0.0% 00:41:17.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.200 complete : 0=0.0%, 4=92.0%, 8=2.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.200 issued rwts: total=566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.200 filename1: (groupid=0, jobs=1): err= 0: pid=469242: Mon Nov 18 00:46:39 2024 
00:41:17.200 read: IOPS=62, BW=250KiB/s (256kB/s)(2520KiB/10076msec) 00:41:17.200 slat (nsec): min=4062, max=66003, avg=17953.27, stdev=11022.83 00:41:17.200 clat (msec): min=78, max=391, avg=255.62, stdev=46.10 00:41:17.200 lat (msec): min=78, max=391, avg=255.64, stdev=46.10 00:41:17.200 clat percentiles (msec): 00:41:17.200 | 1.00th=[ 79], 5.00th=[ 192], 10.00th=[ 211], 20.00th=[ 224], 00:41:17.200 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 257], 60.00th=[ 262], 00:41:17.200 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 330], 95.00th=[ 334], 00:41:17.200 | 99.00th=[ 388], 99.50th=[ 388], 99.90th=[ 393], 99.95th=[ 393], 00:41:17.200 | 99.99th=[ 393] 00:41:17.200 bw ( KiB/s): min= 128, max= 384, per=4.25%, avg=245.60, stdev=69.89, samples=20 00:41:17.200 iops : min= 32, max= 96, avg=61.40, stdev=17.47, samples=20 00:41:17.200 lat (msec) : 100=1.59%, 250=40.63%, 500=57.78% 00:41:17.200 cpu : usr=98.44%, sys=1.05%, ctx=23, majf=0, minf=37 00:41:17.200 IO depths : 1=3.0%, 2=6.8%, 4=17.6%, 8=63.0%, 16=9.5%, 32=0.0%, >=64=0.0% 00:41:17.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.200 complete : 0=0.0%, 4=91.9%, 8=2.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.200 issued rwts: total=630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.200 filename1: (groupid=0, jobs=1): err= 0: pid=469243: Mon Nov 18 00:46:39 2024 00:41:17.200 read: IOPS=49, BW=196KiB/s (201kB/s)(1976KiB/10069msec) 00:41:17.200 slat (usec): min=8, max=106, avg=47.38, stdev=28.97 00:41:17.200 clat (msec): min=166, max=505, avg=325.56, stdev=65.67 00:41:17.200 lat (msec): min=166, max=505, avg=325.60, stdev=65.67 00:41:17.200 clat percentiles (msec): 00:41:17.200 | 1.00th=[ 169], 5.00th=[ 230], 10.00th=[ 257], 20.00th=[ 271], 00:41:17.200 | 30.00th=[ 288], 40.00th=[ 300], 50.00th=[ 326], 60.00th=[ 351], 00:41:17.200 | 70.00th=[ 372], 80.00th=[ 384], 90.00th=[ 397], 95.00th=[ 447], 00:41:17.200 | 
99.00th=[ 498], 99.50th=[ 498], 99.90th=[ 506], 99.95th=[ 506], 00:41:17.200 | 99.99th=[ 506] 00:41:17.200 bw ( KiB/s): min= 128, max= 256, per=3.31%, avg=191.20, stdev=60.42, samples=20 00:41:17.200 iops : min= 32, max= 64, avg=47.80, stdev=15.11, samples=20 00:41:17.200 lat (msec) : 250=6.48%, 500=93.12%, 750=0.40% 00:41:17.200 cpu : usr=98.17%, sys=1.20%, ctx=52, majf=0, minf=30 00:41:17.200 IO depths : 1=3.0%, 2=9.3%, 4=25.1%, 8=53.2%, 16=9.3%, 32=0.0%, >=64=0.0% 00:41:17.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.200 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.200 issued rwts: total=494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.200 filename1: (groupid=0, jobs=1): err= 0: pid=469244: Mon Nov 18 00:46:39 2024 00:41:17.200 read: IOPS=56, BW=227KiB/s (233kB/s)(2296KiB/10096msec) 00:41:17.200 slat (usec): min=7, max=104, avg=46.83, stdev=26.61 00:41:17.200 clat (msec): min=160, max=511, avg=280.82, stdev=78.35 00:41:17.200 lat (msec): min=160, max=511, avg=280.87, stdev=78.37 00:41:17.200 clat percentiles (msec): 00:41:17.200 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 197], 20.00th=[ 205], 00:41:17.200 | 30.00th=[ 215], 40.00th=[ 259], 50.00th=[ 268], 60.00th=[ 292], 00:41:17.200 | 70.00th=[ 300], 80.00th=[ 359], 90.00th=[ 393], 95.00th=[ 397], 00:41:17.200 | 99.00th=[ 498], 99.50th=[ 510], 99.90th=[ 510], 99.95th=[ 510], 00:41:17.200 | 99.99th=[ 510] 00:41:17.200 bw ( KiB/s): min= 128, max= 368, per=3.87%, avg=223.20, stdev=67.78, samples=20 00:41:17.200 iops : min= 32, max= 92, avg=55.80, stdev=16.94, samples=20 00:41:17.200 lat (msec) : 250=35.54%, 500=63.76%, 750=0.70% 00:41:17.200 cpu : usr=98.29%, sys=1.18%, ctx=16, majf=0, minf=52 00:41:17.200 IO depths : 1=3.3%, 2=9.6%, 4=25.1%, 8=53.0%, 16=9.1%, 32=0.0%, >=64=0.0% 00:41:17.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:41:17.200 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.200 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.200 filename1: (groupid=0, jobs=1): err= 0: pid=469245: Mon Nov 18 00:46:39 2024 00:41:17.200 read: IOPS=60, BW=244KiB/s (250kB/s)(2456KiB/10073msec) 00:41:17.200 slat (nsec): min=8045, max=96498, avg=33120.90, stdev=27388.51 00:41:17.200 clat (msec): min=95, max=420, avg=262.03, stdev=42.93 00:41:17.200 lat (msec): min=95, max=420, avg=262.06, stdev=42.94 00:41:17.200 clat percentiles (msec): 00:41:17.200 | 1.00th=[ 96], 5.00th=[ 201], 10.00th=[ 222], 20.00th=[ 236], 00:41:17.200 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 257], 60.00th=[ 266], 00:41:17.200 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 326], 95.00th=[ 334], 00:41:17.200 | 99.00th=[ 393], 99.50th=[ 401], 99.90th=[ 422], 99.95th=[ 422], 00:41:17.200 | 99.99th=[ 422] 00:41:17.200 bw ( KiB/s): min= 128, max= 368, per=4.15%, avg=239.20, stdev=65.55, samples=20 00:41:17.200 iops : min= 32, max= 92, avg=59.80, stdev=16.39, samples=20 00:41:17.200 lat (msec) : 100=1.63%, 250=31.60%, 500=66.78% 00:41:17.200 cpu : usr=98.30%, sys=1.22%, ctx=32, majf=0, minf=48 00:41:17.200 IO depths : 1=1.5%, 2=5.4%, 4=17.9%, 8=64.2%, 16=11.1%, 32=0.0%, >=64=0.0% 00:41:17.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.200 complete : 0=0.0%, 4=92.1%, 8=2.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.200 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.200 filename1: (groupid=0, jobs=1): err= 0: pid=469246: Mon Nov 18 00:46:39 2024 00:41:17.200 read: IOPS=64, BW=258KiB/s (265kB/s)(2608KiB/10092msec) 00:41:17.200 slat (nsec): min=6013, max=43863, avg=20224.85, stdev=5048.13 00:41:17.200 clat (msec): min=125, max=400, avg=247.23, stdev=48.53 00:41:17.200 lat 
(msec): min=125, max=400, avg=247.25, stdev=48.53 00:41:17.200 clat percentiles (msec): 00:41:17.200 | 1.00th=[ 127], 5.00th=[ 178], 10.00th=[ 190], 20.00th=[ 203], 00:41:17.200 | 30.00th=[ 224], 40.00th=[ 241], 50.00th=[ 251], 60.00th=[ 262], 00:41:17.200 | 70.00th=[ 268], 80.00th=[ 284], 90.00th=[ 296], 95.00th=[ 309], 00:41:17.200 | 99.00th=[ 388], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:41:17.200 | 99.99th=[ 401] 00:41:17.200 bw ( KiB/s): min= 176, max= 368, per=4.41%, avg=254.40, stdev=45.82, samples=20 00:41:17.200 iops : min= 44, max= 92, avg=63.60, stdev=11.45, samples=20 00:41:17.200 lat (msec) : 250=49.08%, 500=50.92% 00:41:17.200 cpu : usr=98.14%, sys=1.25%, ctx=16, majf=0, minf=45 00:41:17.200 IO depths : 1=1.7%, 2=5.7%, 4=17.9%, 8=63.8%, 16=10.9%, 32=0.0%, >=64=0.0% 00:41:17.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.200 complete : 0=0.0%, 4=92.1%, 8=2.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.200 issued rwts: total=652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.200 filename2: (groupid=0, jobs=1): err= 0: pid=469247: Mon Nov 18 00:46:39 2024 00:41:17.200 read: IOPS=57, BW=231KiB/s (237kB/s)(2328KiB/10074msec) 00:41:17.200 slat (usec): min=8, max=104, avg=35.89, stdev=30.29 00:41:17.200 clat (msec): min=144, max=440, avg=275.94, stdev=43.80 00:41:17.200 lat (msec): min=144, max=440, avg=275.98, stdev=43.81 00:41:17.200 clat percentiles (msec): 00:41:17.200 | 1.00th=[ 197], 5.00th=[ 222], 10.00th=[ 234], 20.00th=[ 245], 00:41:17.200 | 30.00th=[ 253], 40.00th=[ 259], 50.00th=[ 264], 60.00th=[ 275], 00:41:17.200 | 70.00th=[ 288], 80.00th=[ 300], 90.00th=[ 342], 95.00th=[ 372], 00:41:17.200 | 99.00th=[ 376], 99.50th=[ 401], 99.90th=[ 443], 99.95th=[ 443], 00:41:17.200 | 99.99th=[ 443] 00:41:17.200 bw ( KiB/s): min= 128, max= 304, per=3.96%, avg=228.80, stdev=56.41, samples=20 00:41:17.200 iops : min= 32, max= 76, avg=57.20, 
stdev=14.10, samples=20 00:41:17.200 lat (msec) : 250=27.84%, 500=72.16% 00:41:17.201 cpu : usr=98.18%, sys=1.32%, ctx=17, majf=0, minf=47 00:41:17.201 IO depths : 1=2.7%, 2=6.7%, 4=18.0%, 8=62.7%, 16=9.8%, 32=0.0%, >=64=0.0% 00:41:17.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.201 complete : 0=0.0%, 4=92.0%, 8=2.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.201 issued rwts: total=582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.201 filename2: (groupid=0, jobs=1): err= 0: pid=469248: Mon Nov 18 00:46:39 2024 00:41:17.201 read: IOPS=51, BW=204KiB/s (209kB/s)(2048KiB/10022msec) 00:41:17.201 slat (usec): min=6, max=108, avg=48.16, stdev=29.05 00:41:17.201 clat (msec): min=144, max=523, avg=312.75, stdev=61.89 00:41:17.201 lat (msec): min=144, max=523, avg=312.80, stdev=61.89 00:41:17.201 clat percentiles (msec): 00:41:17.201 | 1.00th=[ 201], 5.00th=[ 222], 10.00th=[ 251], 20.00th=[ 259], 00:41:17.201 | 30.00th=[ 275], 40.00th=[ 284], 50.00th=[ 296], 60.00th=[ 334], 00:41:17.201 | 70.00th=[ 359], 80.00th=[ 372], 90.00th=[ 384], 95.00th=[ 405], 00:41:17.201 | 99.00th=[ 489], 99.50th=[ 523], 99.90th=[ 523], 99.95th=[ 523], 00:41:17.201 | 99.99th=[ 523] 00:41:17.201 bw ( KiB/s): min= 128, max= 256, per=3.44%, avg=198.40, stdev=60.85, samples=20 00:41:17.201 iops : min= 32, max= 64, avg=49.60, stdev=15.21, samples=20 00:41:17.201 lat (msec) : 250=7.42%, 500=91.80%, 750=0.78% 00:41:17.201 cpu : usr=98.37%, sys=1.12%, ctx=44, majf=0, minf=41 00:41:17.201 IO depths : 1=2.9%, 2=9.2%, 4=25.0%, 8=53.3%, 16=9.6%, 32=0.0%, >=64=0.0% 00:41:17.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.201 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.201 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.201 filename2: 
(groupid=0, jobs=1): err= 0: pid=469249: Mon Nov 18 00:46:39 2024 00:41:17.201 read: IOPS=58, BW=235KiB/s (241kB/s)(2368KiB/10068msec) 00:41:17.201 slat (usec): min=8, max=110, avg=36.68, stdev=28.40 00:41:17.201 clat (msec): min=207, max=433, avg=270.66, stdev=39.65 00:41:17.201 lat (msec): min=207, max=433, avg=270.70, stdev=39.66 00:41:17.201 clat percentiles (msec): 00:41:17.201 | 1.00th=[ 213], 5.00th=[ 220], 10.00th=[ 234], 20.00th=[ 241], 00:41:17.201 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 271], 00:41:17.201 | 70.00th=[ 279], 80.00th=[ 292], 90.00th=[ 330], 95.00th=[ 359], 00:41:17.201 | 99.00th=[ 393], 99.50th=[ 393], 99.90th=[ 435], 99.95th=[ 435], 00:41:17.201 | 99.99th=[ 435] 00:41:17.201 bw ( KiB/s): min= 128, max= 368, per=3.99%, avg=230.40, stdev=70.87, samples=20 00:41:17.201 iops : min= 32, max= 92, avg=57.60, stdev=17.72, samples=20 00:41:17.201 lat (msec) : 250=35.14%, 500=64.86% 00:41:17.201 cpu : usr=98.36%, sys=1.09%, ctx=39, majf=0, minf=37 00:41:17.201 IO depths : 1=2.4%, 2=8.6%, 4=25.0%, 8=53.9%, 16=10.1%, 32=0.0%, >=64=0.0% 00:41:17.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.201 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.201 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.201 filename2: (groupid=0, jobs=1): err= 0: pid=469250: Mon Nov 18 00:46:39 2024 00:41:17.201 read: IOPS=71, BW=287KiB/s (294kB/s)(2904KiB/10110msec) 00:41:17.201 slat (usec): min=4, max=125, avg=34.65, stdev=30.93 00:41:17.201 clat (msec): min=3, max=361, avg=221.87, stdev=75.01 00:41:17.201 lat (msec): min=3, max=361, avg=221.90, stdev=75.01 00:41:17.201 clat percentiles (msec): 00:41:17.201 | 1.00th=[ 4], 5.00th=[ 22], 10.00th=[ 110], 20.00th=[ 201], 00:41:17.201 | 30.00th=[ 213], 40.00th=[ 230], 50.00th=[ 239], 60.00th=[ 249], 00:41:17.201 | 70.00th=[ 259], 80.00th=[ 268], 
90.00th=[ 284], 95.00th=[ 300], 00:41:17.201 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:41:17.201 | 99.99th=[ 363] 00:41:17.201 bw ( KiB/s): min= 128, max= 768, per=4.93%, avg=284.00, stdev=131.62, samples=20 00:41:17.201 iops : min= 32, max= 192, avg=71.00, stdev=32.90, samples=20 00:41:17.201 lat (msec) : 4=2.20%, 10=2.20%, 50=4.41%, 250=51.24%, 500=39.94% 00:41:17.201 cpu : usr=98.03%, sys=1.38%, ctx=78, majf=0, minf=55 00:41:17.201 IO depths : 1=2.9%, 2=8.1%, 4=21.6%, 8=57.7%, 16=9.6%, 32=0.0%, >=64=0.0% 00:41:17.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.201 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.201 issued rwts: total=726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.201 filename2: (groupid=0, jobs=1): err= 0: pid=469251: Mon Nov 18 00:46:39 2024 00:41:17.201 read: IOPS=58, BW=234KiB/s (239kB/s)(2360KiB/10093msec) 00:41:17.201 slat (usec): min=6, max=120, avg=34.51, stdev=24.30 00:41:17.201 clat (msec): min=84, max=415, avg=273.16, stdev=59.07 00:41:17.201 lat (msec): min=84, max=415, avg=273.20, stdev=59.08 00:41:17.201 clat percentiles (msec): 00:41:17.201 | 1.00th=[ 85], 5.00th=[ 148], 10.00th=[ 234], 20.00th=[ 247], 00:41:17.201 | 30.00th=[ 253], 40.00th=[ 259], 50.00th=[ 271], 60.00th=[ 279], 00:41:17.201 | 70.00th=[ 296], 80.00th=[ 309], 90.00th=[ 355], 95.00th=[ 372], 00:41:17.201 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 418], 99.95th=[ 418], 00:41:17.201 | 99.99th=[ 418] 00:41:17.201 bw ( KiB/s): min= 128, max= 384, per=3.97%, avg=229.60, stdev=66.74, samples=20 00:41:17.201 iops : min= 32, max= 96, avg=57.40, stdev=16.68, samples=20 00:41:17.201 lat (msec) : 100=2.71%, 250=22.03%, 500=75.25% 00:41:17.201 cpu : usr=98.08%, sys=1.37%, ctx=14, majf=0, minf=33 00:41:17.201 IO depths : 1=4.4%, 2=10.7%, 4=25.1%, 8=51.9%, 16=8.0%, 32=0.0%, >=64=0.0% 00:41:17.201 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.201 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.201 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.201 filename2: (groupid=0, jobs=1): err= 0: pid=469252: Mon Nov 18 00:46:39 2024 00:41:17.201 read: IOPS=70, BW=284KiB/s (291kB/s)(2864KiB/10092msec) 00:41:17.201 slat (nsec): min=6558, max=78501, avg=11415.30, stdev=8024.63 00:41:17.201 clat (msec): min=117, max=379, avg=225.04, stdev=50.04 00:41:17.201 lat (msec): min=117, max=379, avg=225.05, stdev=50.04 00:41:17.201 clat percentiles (msec): 00:41:17.201 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 176], 20.00th=[ 190], 00:41:17.201 | 30.00th=[ 199], 40.00th=[ 207], 50.00th=[ 222], 60.00th=[ 236], 00:41:17.201 | 70.00th=[ 247], 80.00th=[ 262], 90.00th=[ 292], 95.00th=[ 321], 00:41:17.201 | 99.00th=[ 380], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:41:17.201 | 99.99th=[ 380] 00:41:17.201 bw ( KiB/s): min= 176, max= 384, per=4.84%, avg=280.00, stdev=54.07, samples=20 00:41:17.201 iops : min= 44, max= 96, avg=70.00, stdev=13.52, samples=20 00:41:17.201 lat (msec) : 250=73.18%, 500=26.82% 00:41:17.201 cpu : usr=98.45%, sys=1.10%, ctx=29, majf=0, minf=45 00:41:17.201 IO depths : 1=0.4%, 2=1.8%, 4=9.9%, 8=75.4%, 16=12.4%, 32=0.0%, >=64=0.0% 00:41:17.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.201 complete : 0=0.0%, 4=89.7%, 8=5.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.201 issued rwts: total=716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.201 filename2: (groupid=0, jobs=1): err= 0: pid=469253: Mon Nov 18 00:46:39 2024 00:41:17.201 read: IOPS=57, BW=232KiB/s (238kB/s)(2328KiB/10035msec) 00:41:17.201 slat (nsec): min=8100, max=97706, avg=37547.38, stdev=28019.05 00:41:17.201 clat (msec): min=167, 
max=401, avg=275.55, stdev=43.55 00:41:17.201 lat (msec): min=167, max=401, avg=275.58, stdev=43.56 00:41:17.201 clat percentiles (msec): 00:41:17.201 | 1.00th=[ 209], 5.00th=[ 224], 10.00th=[ 236], 20.00th=[ 241], 00:41:17.201 | 30.00th=[ 249], 40.00th=[ 257], 50.00th=[ 266], 60.00th=[ 275], 00:41:17.201 | 70.00th=[ 288], 80.00th=[ 309], 90.00th=[ 334], 95.00th=[ 359], 00:41:17.201 | 99.00th=[ 393], 99.50th=[ 393], 99.90th=[ 401], 99.95th=[ 401], 00:41:17.201 | 99.99th=[ 401] 00:41:17.201 bw ( KiB/s): min= 128, max= 384, per=3.92%, avg=226.40, stdev=67.74, samples=20 00:41:17.202 iops : min= 32, max= 96, avg=56.60, stdev=16.93, samples=20 00:41:17.202 lat (msec) : 250=31.62%, 500=68.38% 00:41:17.202 cpu : usr=98.24%, sys=1.20%, ctx=41, majf=0, minf=49 00:41:17.202 IO depths : 1=3.4%, 2=8.8%, 4=22.2%, 8=56.5%, 16=9.1%, 32=0.0%, >=64=0.0% 00:41:17.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.202 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.202 issued rwts: total=582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.202 filename2: (groupid=0, jobs=1): err= 0: pid=469254: Mon Nov 18 00:46:39 2024 00:41:17.202 read: IOPS=47, BW=192KiB/s (196kB/s)(1920KiB/10022msec) 00:41:17.202 slat (nsec): min=8255, max=57813, avg=16350.22, stdev=8943.36 00:41:17.202 clat (msec): min=207, max=506, avg=333.89, stdev=53.07 00:41:17.202 lat (msec): min=207, max=506, avg=333.90, stdev=53.07 00:41:17.202 clat percentiles (msec): 00:41:17.202 | 1.00th=[ 255], 5.00th=[ 259], 10.00th=[ 268], 20.00th=[ 284], 00:41:17.202 | 30.00th=[ 296], 40.00th=[ 309], 50.00th=[ 330], 60.00th=[ 355], 00:41:17.202 | 70.00th=[ 380], 80.00th=[ 388], 90.00th=[ 397], 95.00th=[ 397], 00:41:17.202 | 99.00th=[ 447], 99.50th=[ 506], 99.90th=[ 506], 99.95th=[ 506], 00:41:17.202 | 99.99th=[ 506] 00:41:17.202 bw ( KiB/s): min= 112, max= 256, per=3.21%, avg=185.60, stdev=65.54, 
samples=20 00:41:17.202 iops : min= 28, max= 64, avg=46.40, stdev=16.38, samples=20 00:41:17.202 lat (msec) : 250=0.83%, 500=98.33%, 750=0.83% 00:41:17.202 cpu : usr=98.28%, sys=1.27%, ctx=24, majf=0, minf=42 00:41:17.202 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:41:17.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.202 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.202 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:17.202 00:41:17.202 Run status group 0 (all jobs): 00:41:17.202 READ: bw=5763KiB/s (5901kB/s), 192KiB/s-298KiB/s (196kB/s-305kB/s), io=56.9MiB (59.7MB), run=10022-10110msec 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set 
+x 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.202 bdev_null0 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 
00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.202 [2024-11-18 00:46:39.526701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.202 bdev_null1 
00:41:17.202 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:17.203 00:46:39 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:17.203 { 00:41:17.203 "params": { 00:41:17.203 "name": "Nvme$subsystem", 00:41:17.203 "trtype": "$TEST_TRANSPORT", 00:41:17.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:17.203 "adrfam": "ipv4", 00:41:17.203 "trsvcid": "$NVMF_PORT", 00:41:17.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:17.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:17.203 "hdgst": ${hdgst:-false}, 00:41:17.203 "ddgst": ${ddgst:-false} 00:41:17.203 }, 00:41:17.203 "method": "bdev_nvme_attach_controller" 00:41:17.203 } 00:41:17.203 EOF 00:41:17.203 )") 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 
00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:17.203 { 00:41:17.203 "params": { 00:41:17.203 "name": "Nvme$subsystem", 00:41:17.203 "trtype": "$TEST_TRANSPORT", 00:41:17.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:17.203 "adrfam": "ipv4", 00:41:17.203 "trsvcid": "$NVMF_PORT", 00:41:17.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:17.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:17.203 "hdgst": ${hdgst:-false}, 00:41:17.203 "ddgst": ${ddgst:-false} 00:41:17.203 }, 00:41:17.203 "method": "bdev_nvme_attach_controller" 00:41:17.203 } 00:41:17.203 EOF 00:41:17.203 )") 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:17.203 
00:46:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:17.203 "params": { 00:41:17.203 "name": "Nvme0", 00:41:17.203 "trtype": "tcp", 00:41:17.203 "traddr": "10.0.0.2", 00:41:17.203 "adrfam": "ipv4", 00:41:17.203 "trsvcid": "4420", 00:41:17.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:17.203 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:17.203 "hdgst": false, 00:41:17.203 "ddgst": false 00:41:17.203 }, 00:41:17.203 "method": "bdev_nvme_attach_controller" 00:41:17.203 },{ 00:41:17.203 "params": { 00:41:17.203 "name": "Nvme1", 00:41:17.203 "trtype": "tcp", 00:41:17.203 "traddr": "10.0.0.2", 00:41:17.203 "adrfam": "ipv4", 00:41:17.203 "trsvcid": "4420", 00:41:17.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:17.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:17.203 "hdgst": false, 00:41:17.203 "ddgst": false 00:41:17.203 }, 00:41:17.203 "method": "bdev_nvme_attach_controller" 00:41:17.203 }' 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:17.203 00:46:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:17.203 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:17.203 ... 00:41:17.203 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:17.203 ... 00:41:17.203 fio-3.35 00:41:17.203 Starting 4 threads 00:41:22.467 00:41:22.467 filename0: (groupid=0, jobs=1): err= 0: pid=470634: Mon Nov 18 00:46:45 2024 00:41:22.467 read: IOPS=1930, BW=15.1MiB/s (15.8MB/s)(75.5MiB/5004msec) 00:41:22.467 slat (nsec): min=4031, max=72616, avg=16360.57, stdev=9213.18 00:41:22.467 clat (usec): min=795, max=8142, avg=4087.85, stdev=432.41 00:41:22.467 lat (usec): min=803, max=8155, avg=4104.21, stdev=433.05 00:41:22.467 clat percentiles (usec): 00:41:22.467 | 1.00th=[ 2737], 5.00th=[ 3425], 10.00th=[ 3687], 20.00th=[ 3916], 00:41:22.467 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4178], 00:41:22.467 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 4490], 00:41:22.467 | 99.00th=[ 5407], 99.50th=[ 6063], 99.90th=[ 7308], 99.95th=[ 7832], 00:41:22.467 | 99.99th=[ 8160] 00:41:22.467 bw ( KiB/s): min=14976, max=15920, per=25.37%, avg=15446.20, stdev=316.59, samples=10 00:41:22.467 iops : min= 1872, max= 1990, avg=1930.70, stdev=39.67, samples=10 00:41:22.467 lat (usec) : 1000=0.04% 00:41:22.467 lat (msec) : 2=0.32%, 4=28.42%, 10=71.22% 00:41:22.467 cpu : usr=95.60%, sys=3.90%, ctx=11, majf=0, minf=0 00:41:22.467 IO depths : 1=0.5%, 2=14.5%, 4=57.9%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:22.467 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.467 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.467 issued rwts: total=9660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.467 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:22.467 filename0: (groupid=0, jobs=1): err= 0: pid=470635: Mon Nov 18 00:46:45 2024 00:41:22.467 read: IOPS=1871, BW=14.6MiB/s (15.3MB/s)(73.1MiB/5002msec) 00:41:22.467 slat (nsec): min=6813, max=69458, avg=20958.96, stdev=8630.76 00:41:22.467 clat (usec): min=950, max=8076, avg=4199.19, stdev=575.49 00:41:22.467 lat (usec): min=967, max=8130, avg=4220.15, stdev=575.22 00:41:22.467 clat percentiles (usec): 00:41:22.467 | 1.00th=[ 2507], 5.00th=[ 3687], 10.00th=[ 3884], 20.00th=[ 3982], 00:41:22.467 | 30.00th=[ 4047], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4178], 00:41:22.467 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4555], 95.00th=[ 5080], 00:41:22.467 | 99.00th=[ 6587], 99.50th=[ 7046], 99.90th=[ 7635], 99.95th=[ 7701], 00:41:22.467 | 99.99th=[ 8094] 00:41:22.467 bw ( KiB/s): min=14736, max=15216, per=24.59%, avg=14970.90, stdev=146.51, samples=10 00:41:22.467 iops : min= 1842, max= 1902, avg=1871.30, stdev=18.37, samples=10 00:41:22.467 lat (usec) : 1000=0.04% 00:41:22.467 lat (msec) : 2=0.60%, 4=21.79%, 10=77.57% 00:41:22.467 cpu : usr=95.62%, sys=3.90%, ctx=8, majf=0, minf=9 00:41:22.467 IO depths : 1=0.2%, 2=17.3%, 4=56.0%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:22.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.467 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.467 issued rwts: total=9363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.467 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:22.467 filename1: (groupid=0, jobs=1): err= 0: pid=470636: Mon Nov 18 00:46:45 2024 00:41:22.467 read: IOPS=1909, BW=14.9MiB/s (15.6MB/s)(74.6MiB/5004msec) 00:41:22.467 slat (nsec): min=3972, max=72468, 
avg=18886.77, stdev=10013.02 00:41:22.467 clat (usec): min=779, max=10363, avg=4118.48, stdev=471.79 00:41:22.467 lat (usec): min=793, max=10381, avg=4137.37, stdev=472.03 00:41:22.467 clat percentiles (usec): 00:41:22.467 | 1.00th=[ 2540], 5.00th=[ 3523], 10.00th=[ 3752], 20.00th=[ 3949], 00:41:22.467 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4178], 00:41:22.467 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4621], 00:41:22.467 | 99.00th=[ 5932], 99.50th=[ 6456], 99.90th=[ 7439], 99.95th=[ 8455], 00:41:22.467 | 99.99th=[10421] 00:41:22.467 bw ( KiB/s): min=14976, max=15584, per=25.09%, avg=15280.00, stdev=197.41, samples=10 00:41:22.467 iops : min= 1872, max= 1948, avg=1910.00, stdev=24.68, samples=10 00:41:22.467 lat (usec) : 1000=0.02% 00:41:22.467 lat (msec) : 2=0.43%, 4=26.21%, 10=73.33%, 20=0.01% 00:41:22.467 cpu : usr=96.44%, sys=3.06%, ctx=10, majf=0, minf=0 00:41:22.467 IO depths : 1=0.4%, 2=19.7%, 4=53.8%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:22.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.467 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.467 issued rwts: total=9554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.467 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:22.467 filename1: (groupid=0, jobs=1): err= 0: pid=470637: Mon Nov 18 00:46:45 2024 00:41:22.467 read: IOPS=1900, BW=14.8MiB/s (15.6MB/s)(74.3MiB/5003msec) 00:41:22.467 slat (nsec): min=3945, max=72278, avg=21565.65, stdev=10252.37 00:41:22.467 clat (usec): min=690, max=7591, avg=4123.11, stdev=529.01 00:41:22.467 lat (usec): min=714, max=7609, avg=4144.68, stdev=529.07 00:41:22.467 clat percentiles (usec): 00:41:22.467 | 1.00th=[ 2147], 5.00th=[ 3490], 10.00th=[ 3818], 20.00th=[ 3949], 00:41:22.467 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4146], 00:41:22.467 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4883], 00:41:22.467 | 
99.00th=[ 6128], 99.50th=[ 6652], 99.90th=[ 7177], 99.95th=[ 7439], 00:41:22.467 | 99.99th=[ 7570] 00:41:22.467 bw ( KiB/s): min=14944, max=15616, per=24.97%, avg=15204.80, stdev=226.28, samples=10 00:41:22.467 iops : min= 1868, max= 1952, avg=1900.60, stdev=28.29, samples=10 00:41:22.467 lat (usec) : 750=0.01%, 1000=0.12% 00:41:22.467 lat (msec) : 2=0.81%, 4=26.60%, 10=72.46% 00:41:22.467 cpu : usr=95.70%, sys=3.76%, ctx=6, majf=0, minf=0 00:41:22.467 IO depths : 1=0.2%, 2=20.8%, 4=53.0%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:22.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.467 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:22.467 issued rwts: total=9510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:22.467 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:22.467 00:41:22.467 Run status group 0 (all jobs): 00:41:22.467 READ: bw=59.5MiB/s (62.4MB/s), 14.6MiB/s-15.1MiB/s (15.3MB/s-15.8MB/s), io=298MiB (312MB), run=5002-5004msec 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- 
# rpc_cmd bdev_null_delete bdev_null0 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.467 00:46:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.468 00:46:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:22.468 00:46:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.468 00:46:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.468 00:46:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.468 00:41:22.468 real 0m24.038s 00:41:22.468 user 4m35.012s 00:41:22.468 sys 0m5.473s 00:41:22.468 00:46:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:22.468 00:46:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:22.468 ************************************ 00:41:22.468 END TEST fio_dif_rand_params 00:41:22.468 ************************************ 00:41:22.468 00:46:45 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 
00:41:22.468 00:46:45 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:22.468 00:46:45 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:22.468 00:46:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:22.468 ************************************ 00:41:22.468 START TEST fio_dif_digest 00:41:22.468 ************************************ 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:22.468 bdev_null0 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:22.468 [2024-11-18 00:46:45.846491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # 
gen_nvmf_target_json 0 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:22.468 { 00:41:22.468 "params": { 00:41:22.468 "name": "Nvme$subsystem", 00:41:22.468 "trtype": "$TEST_TRANSPORT", 00:41:22.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:22.468 "adrfam": "ipv4", 00:41:22.468 "trsvcid": "$NVMF_PORT", 00:41:22.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:22.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:22.468 "hdgst": ${hdgst:-false}, 00:41:22.468 "ddgst": ${ddgst:-false} 00:41:22.468 }, 00:41:22.468 "method": "bdev_nvme_attach_controller" 00:41:22.468 } 00:41:22.468 EOF 00:41:22.468 )") 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:22.468 "params": { 00:41:22.468 "name": "Nvme0", 00:41:22.468 "trtype": "tcp", 00:41:22.468 "traddr": "10.0.0.2", 00:41:22.468 "adrfam": "ipv4", 00:41:22.468 "trsvcid": "4420", 00:41:22.468 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:22.468 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:22.468 "hdgst": true, 00:41:22.468 "ddgst": true 00:41:22.468 }, 00:41:22.468 "method": "bdev_nvme_attach_controller" 00:41:22.468 }' 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:22.468 00:46:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:22.468 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:22.468 ... 
00:41:22.468 fio-3.35 00:41:22.468 Starting 3 threads 00:41:34.668 00:41:34.668 filename0: (groupid=0, jobs=1): err= 0: pid=471386: Mon Nov 18 00:46:56 2024 00:41:34.668 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(247MiB/10043msec) 00:41:34.668 slat (nsec): min=4203, max=27378, avg=13851.45, stdev=1462.48 00:41:34.668 clat (usec): min=11630, max=51281, avg=15216.91, stdev=1526.42 00:41:34.668 lat (usec): min=11644, max=51296, avg=15230.76, stdev=1526.46 00:41:34.668 clat percentiles (usec): 00:41:34.668 | 1.00th=[12649], 5.00th=[13435], 10.00th=[13829], 20.00th=[14353], 00:41:34.668 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 00:41:34.668 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16581], 95.00th=[16909], 00:41:34.668 | 99.00th=[17957], 99.50th=[18482], 99.90th=[46924], 99.95th=[51119], 00:41:34.668 | 99.99th=[51119] 00:41:34.668 bw ( KiB/s): min=24320, max=26112, per=32.16%, avg=25254.40, stdev=520.52, samples=20 00:41:34.668 iops : min= 190, max= 204, avg=197.30, stdev= 4.07, samples=20 00:41:34.668 lat (msec) : 20=99.75%, 50=0.20%, 100=0.05% 00:41:34.668 cpu : usr=93.76%, sys=5.63%, ctx=80, majf=0, minf=101 00:41:34.668 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:34.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:34.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:34.668 issued rwts: total=1975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:34.668 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:34.668 filename0: (groupid=0, jobs=1): err= 0: pid=471387: Mon Nov 18 00:46:56 2024 00:41:34.668 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(250MiB/10044msec) 00:41:34.668 slat (nsec): min=4226, max=40549, avg=14382.23, stdev=1719.30 00:41:34.668 clat (usec): min=11700, max=52660, avg=15057.21, stdev=1490.74 00:41:34.668 lat (usec): min=11714, max=52674, avg=15071.59, stdev=1490.74 00:41:34.668 clat percentiles (usec): 00:41:34.668 | 
1.00th=[12780], 5.00th=[13435], 10.00th=[13829], 20.00th=[14222], 00:41:34.668 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15139], 00:41:34.668 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16319], 95.00th=[16712], 00:41:34.668 | 99.00th=[17695], 99.50th=[17957], 99.90th=[45876], 99.95th=[52691], 00:41:34.668 | 99.99th=[52691] 00:41:34.668 bw ( KiB/s): min=24576, max=26624, per=32.50%, avg=25523.20, stdev=512.67, samples=20 00:41:34.668 iops : min= 192, max= 208, avg=199.40, stdev= 4.01, samples=20 00:41:34.668 lat (msec) : 20=99.75%, 50=0.20%, 100=0.05% 00:41:34.668 cpu : usr=93.94%, sys=5.44%, ctx=109, majf=0, minf=100 00:41:34.668 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:34.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:34.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:34.668 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:34.668 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:34.668 filename0: (groupid=0, jobs=1): err= 0: pid=471389: Mon Nov 18 00:46:56 2024 00:41:34.668 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(274MiB/10045msec) 00:41:34.668 slat (nsec): min=4492, max=36932, avg=14607.71, stdev=2558.96 00:41:34.668 clat (usec): min=10635, max=50533, avg=13690.62, stdev=1244.98 00:41:34.668 lat (usec): min=10648, max=50548, avg=13705.23, stdev=1244.97 00:41:34.668 clat percentiles (usec): 00:41:34.668 | 1.00th=[11338], 5.00th=[12125], 10.00th=[12387], 20.00th=[12911], 00:41:34.668 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:41:34.668 | 70.00th=[14222], 80.00th=[14484], 90.00th=[14746], 95.00th=[15139], 00:41:34.668 | 99.00th=[15926], 99.50th=[16450], 99.90th=[21627], 99.95th=[21627], 00:41:34.668 | 99.99th=[50594] 00:41:34.668 bw ( KiB/s): min=26880, max=28928, per=35.71%, avg=28044.80, stdev=584.21, samples=20 00:41:34.668 iops : min= 210, max= 226, avg=219.10, stdev= 4.56, 
samples=20 00:41:34.668 lat (msec) : 20=99.82%, 50=0.14%, 100=0.05% 00:41:34.668 cpu : usr=94.29%, sys=5.21%, ctx=7, majf=0, minf=149 00:41:34.668 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:34.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:34.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:34.668 issued rwts: total=2192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:34.668 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:34.668 00:41:34.668 Run status group 0 (all jobs): 00:41:34.668 READ: bw=76.7MiB/s (80.4MB/s), 24.6MiB/s-27.3MiB/s (25.8MB/s-28.6MB/s), io=770MiB (808MB), run=10043-10045msec 00:41:34.668 00:46:56 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:34.668 00:46:56 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:34.668 00:46:56 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:34.668 00:46:56 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:34.668 00:46:56 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:34.668 00:46:56 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:34.668 00:46:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.668 00:46:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:34.668 00:46:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.668 00:46:56 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:34.668 00:46:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.668 00:46:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:34.668 00:46:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.668 00:41:34.668 real 
0m11.065s 00:41:34.668 user 0m29.444s 00:41:34.668 sys 0m1.894s 00:41:34.668 00:46:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:34.668 00:46:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:34.668 ************************************ 00:41:34.668 END TEST fio_dif_digest 00:41:34.668 ************************************ 00:41:34.668 00:46:56 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:34.668 00:46:56 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:34.668 00:46:56 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:34.668 00:46:56 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:41:34.668 00:46:56 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:34.668 00:46:56 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:41:34.668 00:46:56 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:34.668 00:46:56 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:34.668 rmmod nvme_tcp 00:41:34.668 rmmod nvme_fabrics 00:41:34.668 rmmod nvme_keyring 00:41:34.668 00:46:56 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:34.668 00:46:56 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:41:34.668 00:46:56 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:41:34.668 00:46:56 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 465345 ']' 00:41:34.668 00:46:56 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 465345 00:41:34.668 00:46:56 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 465345 ']' 00:41:34.668 00:46:56 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 465345 00:41:34.668 00:46:56 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:41:34.668 00:46:56 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:34.668 00:46:56 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465345 00:41:34.668 00:46:56 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:34.669 00:46:56 
nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:34.669 00:46:56 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465345' 00:41:34.669 killing process with pid 465345 00:41:34.669 00:46:56 nvmf_dif -- common/autotest_common.sh@973 -- # kill 465345 00:41:34.669 00:46:56 nvmf_dif -- common/autotest_common.sh@978 -- # wait 465345 00:41:34.669 00:46:57 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:41:34.669 00:46:57 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:34.669 Waiting for block devices as requested 00:41:34.669 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:41:34.669 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:34.928 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:34.928 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:34.928 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:34.928 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:35.191 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:35.191 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:35.191 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:35.191 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:35.450 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:35.450 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:35.450 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:35.708 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:35.708 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:35.708 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:35.708 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:35.966 00:46:59 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:35.966 00:46:59 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:35.966 00:46:59 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:41:35.966 00:46:59 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:41:35.966 00:46:59 nvmf_dif -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:41:35.966 00:46:59 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:41:35.966 00:46:59 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:35.966 00:46:59 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:35.966 00:46:59 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:35.966 00:46:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:35.966 00:46:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:37.872 00:47:01 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:37.872 00:41:37.872 real 1m6.730s 00:41:37.872 user 6m31.335s 00:41:37.872 sys 0m16.909s 00:41:37.872 00:47:01 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:37.872 00:47:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:37.872 ************************************ 00:41:37.872 END TEST nvmf_dif 00:41:37.872 ************************************ 00:41:37.872 00:47:01 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:37.872 00:47:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:37.872 00:47:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:37.872 00:47:01 -- common/autotest_common.sh@10 -- # set +x 00:41:38.130 ************************************ 00:41:38.130 START TEST nvmf_abort_qd_sizes 00:41:38.130 ************************************ 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:38.130 * Looking for test storage... 
00:41:38.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:38.130 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:38.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.131 --rc genhtml_branch_coverage=1 00:41:38.131 --rc genhtml_function_coverage=1 00:41:38.131 --rc genhtml_legend=1 00:41:38.131 --rc geninfo_all_blocks=1 00:41:38.131 --rc geninfo_unexecuted_blocks=1 00:41:38.131 00:41:38.131 ' 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:38.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.131 --rc genhtml_branch_coverage=1 00:41:38.131 --rc genhtml_function_coverage=1 00:41:38.131 --rc genhtml_legend=1 00:41:38.131 --rc 
geninfo_all_blocks=1 00:41:38.131 --rc geninfo_unexecuted_blocks=1 00:41:38.131 00:41:38.131 ' 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:38.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.131 --rc genhtml_branch_coverage=1 00:41:38.131 --rc genhtml_function_coverage=1 00:41:38.131 --rc genhtml_legend=1 00:41:38.131 --rc geninfo_all_blocks=1 00:41:38.131 --rc geninfo_unexecuted_blocks=1 00:41:38.131 00:41:38.131 ' 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:38.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:38.131 --rc genhtml_branch_coverage=1 00:41:38.131 --rc genhtml_function_coverage=1 00:41:38.131 --rc genhtml_legend=1 00:41:38.131 --rc geninfo_all_blocks=1 00:41:38.131 --rc geninfo_unexecuted_blocks=1 00:41:38.131 00:41:38.131 ' 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:38.131 00:47:01 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:38.131 00:47:01 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:38.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:41:38.131 00:47:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:40.663 00:47:03 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:40.663 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:40.663 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:40.663 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:40.663 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:40.663 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:40.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:40.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:41:40.664 00:41:40.664 --- 10.0.0.2 ping statistics --- 00:41:40.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:40.664 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:40.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:40.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:41:40.664 00:41:40.664 --- 10.0.0.1 ping statistics --- 00:41:40.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:40.664 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:40.664 00:47:03 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:41.598 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:41.598 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:41.598 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:41.598 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:41.598 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:41.598 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:41.598 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:41.598 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:41.598 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:41.598 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:41.598 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:41.598 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:41.598 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:41.598 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:41.598 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:41.598 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:42.536 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:41:42.536 00:47:06 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:42.536 00:47:06 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:42.537 00:47:06 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:42.537 00:47:06 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:42.537 00:47:06 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:42.537 00:47:06 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:42.795 00:47:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:41:42.795 00:47:06 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:42.795 00:47:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:42.795 00:47:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:42.795 00:47:06 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=476292 00:41:42.795 00:47:06 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:41:42.795 00:47:06 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 476292 00:41:42.795 00:47:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 476292 ']' 00:41:42.795 00:47:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:42.795 00:47:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:42.795 00:47:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:42.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:42.795 00:47:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:42.795 00:47:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:42.795 [2024-11-18 00:47:06.416545] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
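The nvmf_tcp_init sequence traced a little earlier in this log (ip netns add, moving one port into the namespace, addressing both ends, opening TCP port 4420) follows a fixed pattern: the physical port cvl_0_0 becomes the target side inside a private namespace, while cvl_0_1 stays in the root namespace as the initiator side. A dry-run sketch of that pattern is below; the interface names and addresses mirror the log, but the function is illustrative only (it prints the commands rather than running them, and omits the initial `ip -4 addr flush` steps) and is not the test's actual helper.

```shell
# Dry-run sketch of the namespace plumbing from nvmf_tcp_init above.
# Prints, rather than executes, the commands; names/IPs mirror the log.
netns_setup_cmds() {
    local ns=$1 target_if=$2 init_if=$3
    cat <<EOF
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $init_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $init_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $init_if -p tcp --dport 4420 -j ACCEPT
EOF
}

netns_setup_cmds cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Running the generated commands for real requires root; the two `ping -c 1` checks in the log then verify reachability in both directions across the namespace boundary.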
00:41:42.795 [2024-11-18 00:47:06.416627] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:42.795 [2024-11-18 00:47:06.486548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:42.795 [2024-11-18 00:47:06.536726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:42.795 [2024-11-18 00:47:06.536781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:42.795 [2024-11-18 00:47:06.536795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:42.795 [2024-11-18 00:47:06.536806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:42.795 [2024-11-18 00:47:06.536815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
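The `waitforlisten 476292` call above blocks until nvmf_tgt is up and listening on its UNIX-domain RPC socket. A minimal sketch of that polling pattern follows; the socket path, retry count, and the simplification of testing `-e` instead of `-S` (and not re-checking that the pid is still alive, as the real helper does) are assumptions for illustration.

```shell
# Simplified waitforlisten: poll until the target's RPC socket path exists.
# Real code additionally verifies the socket type and that the pid is alive.
wait_for_rpc_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [[ -e $sock ]] && return 0   # simplified; helper tests -S (socket)
        sleep 0.1
    done
    return 1
}
```

Typical use, before issuing any rpc_cmd: `wait_for_rpc_sock /var/tmp/spdk.sock || exit 1`.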
00:41:42.795 [2024-11-18 00:47:06.538211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:42.795 [2024-11-18 00:47:06.538276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:42.795 [2024-11-18 00:47:06.538346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:42.795 [2024-11-18 00:47:06.538343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:43.052 00:47:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
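The `nvme_in_userspace` enumeration that begins above selects PCI functions with the NVMe class code (0x010802, the value behind the `pci_bus_cache["0x010802"]` lookup) and, on Linux, keeps those bound to a driver. A sketch of the class-code scan is below; the sysfs root is made a parameter so the logic can be exercised on a fake tree, whereas the in-tree helper reads a prebuilt pci_bus_cache instead, so treat this as an approximation.

```shell
# Sketch: print PCI addresses whose sysfs class code marks them as NVMe.
# $1 lets tests point at a fake device tree instead of /sys.
list_nvme_bdfs() {
    local root=${1:-/sys/bus/pci/devices} dev class
    for dev in "$root"/*; do
        class=$(cat "$dev/class" 2>/dev/null) || continue
        [[ $class == 0x010802 ]] && basename "$dev"
    done
    return 0
}
```

On this test node the scan yields the single BDF 0000:88:00.0, which the script then hands to `bdev_nvme_attach_controller` as the abort-test target.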
00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:43.053 00:47:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:43.053 ************************************ 00:41:43.053 START TEST spdk_target_abort 00:41:43.053 ************************************ 00:41:43.053 00:47:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:41:43.053 00:47:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:41:43.053 00:47:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:41:43.053 00:47:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.053 00:47:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:46.330 spdk_targetn1 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:46.330 [2024-11-18 00:47:09.540975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:46.330 [2024-11-18 00:47:09.589321] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:46.330 00:47:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:49.614 Initializing NVMe Controllers 00:41:49.614 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:49.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:49.614 Initialization complete. Launching workers. 
00:41:49.614 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12808, failed: 0 00:41:49.614 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1251, failed to submit 11557 00:41:49.614 success 723, unsuccessful 528, failed 0 00:41:49.614 00:47:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:49.615 00:47:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:52.901 Initializing NVMe Controllers 00:41:52.901 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:52.901 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:52.901 Initialization complete. Launching workers. 00:41:52.901 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8737, failed: 0 00:41:52.901 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1245, failed to submit 7492 00:41:52.901 success 321, unsuccessful 924, failed 0 00:41:52.901 00:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:52.901 00:47:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:56.212 Initializing NVMe Controllers 00:41:56.212 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:56.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:56.212 Initialization complete. Launching workers. 
00:41:56.212 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30770, failed: 0 00:41:56.212 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2604, failed to submit 28166 00:41:56.212 success 505, unsuccessful 2099, failed 0 00:41:56.212 00:47:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:41:56.212 00:47:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.212 00:47:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:56.212 00:47:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.212 00:47:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:41:56.212 00:47:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.212 00:47:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:57.146 00:47:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.146 00:47:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 476292 00:41:57.146 00:47:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 476292 ']' 00:41:57.146 00:47:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 476292 00:41:57.146 00:47:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:41:57.146 00:47:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:57.146 00:47:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 476292 00:41:57.146 00:47:20 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:57.146 00:47:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:57.146 00:47:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 476292' 00:41:57.146 killing process with pid 476292 00:41:57.146 00:47:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 476292 00:41:57.146 00:47:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 476292 00:41:57.404 00:41:57.404 real 0m14.301s 00:41:57.404 user 0m54.206s 00:41:57.404 sys 0m2.491s 00:41:57.404 00:47:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:57.404 00:47:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:57.404 ************************************ 00:41:57.404 END TEST spdk_target_abort 00:41:57.404 ************************************ 00:41:57.404 00:47:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:41:57.404 00:47:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:57.404 00:47:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:57.404 00:47:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:57.404 ************************************ 00:41:57.404 START TEST kernel_target_abort 00:41:57.404 ************************************ 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:41:57.404 00:47:21 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:57.404 00:47:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:58.779 Waiting for block devices as requested 00:41:58.780 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:41:58.780 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:58.780 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:58.780 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:59.037 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:59.037 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:59.037 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:59.037 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:59.037 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:59.296 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:59.296 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:59.296 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:59.555 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:59.555 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:59.555 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:59.555 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:59.813 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:41:59.813 No valid GPT data, bailing 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:41:59.813 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:00.071 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:42:00.071 00:42:00.071 Discovery Log Number of Records 2, Generation counter 2 00:42:00.071 =====Discovery Log Entry 0====== 00:42:00.071 trtype: tcp 00:42:00.071 adrfam: ipv4 00:42:00.071 subtype: current discovery subsystem 00:42:00.071 treq: not specified, sq flow control disable supported 00:42:00.071 portid: 1 00:42:00.071 trsvcid: 4420 00:42:00.071 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:00.071 traddr: 10.0.0.1 00:42:00.071 eflags: none 00:42:00.071 sectype: none 00:42:00.071 =====Discovery Log Entry 1====== 00:42:00.071 trtype: tcp 00:42:00.071 adrfam: ipv4 00:42:00.071 subtype: nvme subsystem 00:42:00.071 treq: not specified, sq flow control disable supported 00:42:00.071 portid: 1 00:42:00.071 trsvcid: 4420 00:42:00.071 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:00.071 traddr: 10.0.0.1 00:42:00.071 eflags: none 00:42:00.071 sectype: none 00:42:00.071 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:42:00.071 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:00.071 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:00.071 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:00.071 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:00.071 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:00.071 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:00.071 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:00.071 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:00.071 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:00.071 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:00.072 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:00.072 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:00.072 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:00.072 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:00.072 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:00.072 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:00.072 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:00.072 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:00.072 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:00.072 00:47:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:03.354 Initializing NVMe Controllers 00:42:03.354 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:03.354 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:03.354 Initialization complete. Launching workers. 
00:42:03.354 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56531, failed: 0 00:42:03.354 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56531, failed to submit 0 00:42:03.354 success 0, unsuccessful 56531, failed 0 00:42:03.354 00:47:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:03.354 00:47:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:06.631 Initializing NVMe Controllers 00:42:06.631 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:06.632 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:06.632 Initialization complete. Launching workers. 00:42:06.632 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101929, failed: 0 00:42:06.632 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25670, failed to submit 76259 00:42:06.632 success 0, unsuccessful 25670, failed 0 00:42:06.632 00:47:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:06.632 00:47:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:09.913 Initializing NVMe Controllers 00:42:09.913 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:09.913 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:09.913 Initialization complete. Launching workers. 
00:42:09.913 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96310, failed: 0 00:42:09.913 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24094, failed to submit 72216 00:42:09.913 success 0, unsuccessful 24094, failed 0 00:42:09.913 00:47:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:09.913 00:47:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:09.913 00:47:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:42:09.913 00:47:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:09.913 00:47:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:09.913 00:47:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:09.913 00:47:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:09.913 00:47:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:42:09.913 00:47:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:42:09.913 00:47:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:10.848 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:10.848 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:10.848 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:10.848 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:10.848 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:10.848 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:10.848 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:10.848 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:10.848 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:10.848 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:10.848 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:10.848 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:10.848 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:10.848 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:10.848 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:10.848 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:11.788 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:11.788 00:42:11.788 real 0m14.484s 00:42:11.788 user 0m6.773s 00:42:11.788 sys 0m3.248s 00:42:11.788 00:47:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:11.788 00:47:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:11.788 ************************************ 00:42:11.788 END TEST kernel_target_abort 00:42:11.788 ************************************ 00:42:11.788 00:47:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:11.788 00:47:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:11.788 00:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:11.788 00:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:11.788 00:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:11.788 00:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:11.788 00:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:11.788 00:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:11.788 rmmod nvme_tcp 00:42:11.788 rmmod nvme_fabrics 00:42:11.788 rmmod nvme_keyring 00:42:11.788 00:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:42:11.788 00:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:11.788 00:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:11.788 00:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 476292 ']' 00:42:11.788 00:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 476292 00:42:11.788 00:47:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 476292 ']' 00:42:11.788 00:47:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 476292 00:42:11.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (476292) - No such process 00:42:11.788 00:47:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 476292 is not found' 00:42:11.788 Process with pid 476292 is not found 00:42:11.788 00:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:11.788 00:47:35 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:13.163 Waiting for block devices as requested 00:42:13.163 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:13.163 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:13.421 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:13.421 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:13.421 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:13.421 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:13.678 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:13.678 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:13.678 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:13.678 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:13.935 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:13.935 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:13.935 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:13.935 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:14.195 0000:80:04.2 
(8086 0e22): vfio-pci -> ioatdma 00:42:14.195 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:14.195 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:14.456 00:47:38 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:14.456 00:47:38 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:14.456 00:47:38 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:14.456 00:47:38 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:42:14.456 00:47:38 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:14.456 00:47:38 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:42:14.456 00:47:38 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:14.456 00:47:38 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:14.456 00:47:38 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:14.457 00:47:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:14.457 00:47:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:16.362 00:47:40 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:16.362 00:42:16.362 real 0m38.424s 00:42:16.362 user 1m3.227s 00:42:16.362 sys 0m9.344s 00:42:16.362 00:47:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:16.362 00:47:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:16.362 ************************************ 00:42:16.362 END TEST nvmf_abort_qd_sizes 00:42:16.362 ************************************ 00:42:16.362 00:47:40 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:16.362 00:47:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:16.362 00:47:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:42:16.362 00:47:40 -- common/autotest_common.sh@10 -- # set +x 00:42:16.620 ************************************ 00:42:16.620 START TEST keyring_file 00:42:16.620 ************************************ 00:42:16.620 00:47:40 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:16.620 * Looking for test storage... 00:42:16.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:16.620 00:47:40 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:16.620 00:47:40 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:42:16.620 00:47:40 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:16.620 00:47:40 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:16.620 00:47:40 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:16.620 00:47:40 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:16.620 00:47:40 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:16.621 00:47:40 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:16.621 00:47:40 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:16.621 00:47:40 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:16.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:16.621 --rc genhtml_branch_coverage=1 00:42:16.621 --rc genhtml_function_coverage=1 00:42:16.621 --rc genhtml_legend=1 00:42:16.621 --rc geninfo_all_blocks=1 00:42:16.621 --rc geninfo_unexecuted_blocks=1 00:42:16.621 00:42:16.621 ' 00:42:16.621 00:47:40 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:16.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:16.621 --rc genhtml_branch_coverage=1 00:42:16.621 --rc genhtml_function_coverage=1 00:42:16.621 --rc genhtml_legend=1 00:42:16.621 --rc geninfo_all_blocks=1 00:42:16.621 --rc 
geninfo_unexecuted_blocks=1 00:42:16.621 00:42:16.621 ' 00:42:16.621 00:47:40 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:16.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:16.621 --rc genhtml_branch_coverage=1 00:42:16.621 --rc genhtml_function_coverage=1 00:42:16.621 --rc genhtml_legend=1 00:42:16.621 --rc geninfo_all_blocks=1 00:42:16.621 --rc geninfo_unexecuted_blocks=1 00:42:16.621 00:42:16.621 ' 00:42:16.621 00:47:40 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:16.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:16.621 --rc genhtml_branch_coverage=1 00:42:16.621 --rc genhtml_function_coverage=1 00:42:16.621 --rc genhtml_legend=1 00:42:16.621 --rc geninfo_all_blocks=1 00:42:16.621 --rc geninfo_unexecuted_blocks=1 00:42:16.621 00:42:16.621 ' 00:42:16.621 00:47:40 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:16.621 00:47:40 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:16.621 00:47:40 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:16.621 00:47:40 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:16.621 00:47:40 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:16.621 00:47:40 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:16.621 00:47:40 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:16.621 00:47:40 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:16.621 00:47:40 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:42:16.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:16.621 00:47:40 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:16.621 00:47:40 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:16.621 00:47:40 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:16.621 00:47:40 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:16.621 00:47:40 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:16.621 00:47:40 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:16.621 00:47:40 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:16.621 00:47:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:16.621 00:47:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:16.621 00:47:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:16.621 00:47:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:16.621 00:47:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:16.621 00:47:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.kOa70EFlxI 00:42:16.621 00:47:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:16.621 00:47:40 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:16.621 00:47:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kOa70EFlxI 00:42:16.621 00:47:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kOa70EFlxI 00:42:16.621 00:47:40 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.kOa70EFlxI 00:42:16.621 00:47:40 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:16.621 00:47:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:16.621 00:47:40 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:16.621 00:47:40 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:16.621 00:47:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:16.621 00:47:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:16.621 00:47:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9pJyiOeu2Z 00:42:16.621 00:47:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:16.622 00:47:40 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:16.622 00:47:40 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:16.622 00:47:40 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:16.622 00:47:40 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:16.622 00:47:40 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:16.622 00:47:40 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:16.880 00:47:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9pJyiOeu2Z 00:42:16.880 00:47:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9pJyiOeu2Z 00:42:16.880 00:47:40 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.9pJyiOeu2Z 
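The `prep_key`/`format_interchange_psk` steps logged above (hex key → `python -` → `chmod 0600` temp file) can be sketched as below. This is a hypothetical reconstruction, not SPDK's authoritative implementation: the `01` hash-indicator field and the little-endian CRC32 trailer follow the NVMe/TCP PSK interchange layout as I understand it, and should be treated as assumptions.

```shell
#!/usr/bin/env bash
# Sketch of deriving an NVMe TLS PSK in interchange format from a hex key,
# mirroring what keyring/common.sh's prep_key appears to do in the log.
key=00112233445566778899aabbccddeeff   # same test key as key0 above

psk=$(python3 - "$key" <<'EOF'
import base64, binascii, sys, zlib
raw = binascii.unhexlify(sys.argv[1])          # configured PSK bytes
crc = zlib.crc32(raw).to_bytes(4, "little")    # CRC32 trailer, little-endian (assumption)
print("NVMeTLSkey-1:01:" + base64.b64encode(raw + crc).decode() + ":")
EOF
)

path=$(mktemp)            # e.g. /tmp/tmp.XXXXXXXXXX, as seen in the log
echo "$psk" > "$path"
chmod 0600 "$path"        # keyring_file_add_key expects a private key file
echo "$path"
```

The resulting path is what the log then feeds to `keyring_file_add_key key0 /tmp/tmp.kOa70EFlxI` over the bperf RPC socket.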
00:42:16.880 00:47:40 keyring_file -- keyring/file.sh@30 -- # tgtpid=482066 00:42:16.880 00:47:40 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:16.880 00:47:40 keyring_file -- keyring/file.sh@32 -- # waitforlisten 482066 00:42:16.880 00:47:40 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 482066 ']' 00:42:16.880 00:47:40 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:16.880 00:47:40 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:16.880 00:47:40 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:16.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:16.880 00:47:40 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:16.880 00:47:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:16.880 [2024-11-18 00:47:40.509263] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:42:16.880 [2024-11-18 00:47:40.509378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482066 ] 00:42:16.880 [2024-11-18 00:47:40.579409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:16.880 [2024-11-18 00:47:40.630782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:17.139 00:47:40 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:17.139 00:47:40 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:17.139 00:47:40 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:17.139 00:47:40 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.139 00:47:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:17.139 [2024-11-18 00:47:40.907070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:17.139 null0 00:42:17.139 [2024-11-18 00:47:40.939133] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:17.139 [2024-11-18 00:47:40.939557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:17.139 00:47:40 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.139 00:47:40 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:17.139 00:47:40 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:17.139 00:47:40 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:17.139 00:47:40 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:17.139 00:47:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:42:17.139 00:47:40 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:17.139 00:47:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:17.139 00:47:40 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:17.139 00:47:40 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.398 00:47:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:17.398 [2024-11-18 00:47:40.963187] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:17.398 request: 00:42:17.398 { 00:42:17.398 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:17.398 "secure_channel": false, 00:42:17.398 "listen_address": { 00:42:17.398 "trtype": "tcp", 00:42:17.398 "traddr": "127.0.0.1", 00:42:17.398 "trsvcid": "4420" 00:42:17.398 }, 00:42:17.398 "method": "nvmf_subsystem_add_listener", 00:42:17.398 "req_id": 1 00:42:17.398 } 00:42:17.398 Got JSON-RPC error response 00:42:17.398 response: 00:42:17.398 { 00:42:17.398 "code": -32602, 00:42:17.398 "message": "Invalid parameters" 00:42:17.398 } 00:42:17.398 00:47:40 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:17.398 00:47:40 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:17.398 00:47:40 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:17.398 00:47:40 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:17.398 00:47:40 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:17.398 00:47:40 keyring_file -- keyring/file.sh@47 -- # bperfpid=482079 00:42:17.398 00:47:40 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:17.398 00:47:40 keyring_file -- keyring/file.sh@49 -- # waitforlisten 482079 /var/tmp/bperf.sock 00:42:17.398 00:47:40 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 482079 ']' 00:42:17.398 00:47:40 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:17.398 00:47:40 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:17.398 00:47:40 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:17.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:17.398 00:47:40 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:17.398 00:47:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:17.398 [2024-11-18 00:47:41.010766] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:42:17.398 [2024-11-18 00:47:41.010829] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482079 ] 00:42:17.398 [2024-11-18 00:47:41.075442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:17.398 [2024-11-18 00:47:41.119890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:17.656 00:47:41 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:17.656 00:47:41 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:17.656 00:47:41 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kOa70EFlxI 00:42:17.656 00:47:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kOa70EFlxI 00:42:17.914 00:47:41 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9pJyiOeu2Z 00:42:17.914 00:47:41 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9pJyiOeu2Z 00:42:18.172 00:47:41 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:18.172 00:47:41 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:18.172 00:47:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:18.172 00:47:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:18.172 00:47:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:18.429 00:47:42 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.kOa70EFlxI == \/\t\m\p\/\t\m\p\.\k\O\a\7\0\E\F\l\x\I ]] 00:42:18.429 00:47:42 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:18.429 00:47:42 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:18.429 00:47:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:18.429 00:47:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:18.429 00:47:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:18.687 00:47:42 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.9pJyiOeu2Z == \/\t\m\p\/\t\m\p\.\9\p\J\y\i\O\e\u\2\Z ]] 00:42:18.687 00:47:42 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:18.687 00:47:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:18.687 00:47:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:18.687 00:47:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:18.687 00:47:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:18.687 00:47:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:42:18.944 00:47:42 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:18.944 00:47:42 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:18.944 00:47:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:18.944 00:47:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:18.944 00:47:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:18.944 00:47:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:18.944 00:47:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:19.202 00:47:42 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:19.202 00:47:42 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:19.202 00:47:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:19.460 [2024-11-18 00:47:43.135962] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:19.460 nvme0n1 00:42:19.460 00:47:43 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:19.460 00:47:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:19.460 00:47:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:19.460 00:47:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:19.460 00:47:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:19.460 00:47:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:42:19.719 00:47:43 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:19.719 00:47:43 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:19.719 00:47:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:19.719 00:47:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:19.719 00:47:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:19.719 00:47:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:19.719 00:47:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:19.979 00:47:43 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:19.979 00:47:43 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:20.236 Running I/O for 1 seconds... 00:42:21.170 10370.00 IOPS, 40.51 MiB/s 00:42:21.170 Latency(us) 00:42:21.170 [2024-11-17T23:47:44.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:21.170 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:21.170 nvme0n1 : 1.01 10421.49 40.71 0.00 0.00 12246.17 7330.32 23495.87 00:42:21.170 [2024-11-17T23:47:44.992Z] =================================================================================================================== 00:42:21.170 [2024-11-17T23:47:44.992Z] Total : 10421.49 40.71 0.00 0.00 12246.17 7330.32 23495.87 00:42:21.170 { 00:42:21.170 "results": [ 00:42:21.170 { 00:42:21.170 "job": "nvme0n1", 00:42:21.170 "core_mask": "0x2", 00:42:21.170 "workload": "randrw", 00:42:21.170 "percentage": 50, 00:42:21.170 "status": "finished", 00:42:21.170 "queue_depth": 128, 00:42:21.170 "io_size": 4096, 00:42:21.170 "runtime": 1.007342, 00:42:21.170 "iops": 10421.485453798214, 00:42:21.170 "mibps": 40.70892755389927, 
00:42:21.170 "io_failed": 0, 00:42:21.170 "io_timeout": 0, 00:42:21.170 "avg_latency_us": 12246.170355411614, 00:42:21.170 "min_latency_us": 7330.322962962963, 00:42:21.170 "max_latency_us": 23495.86962962963 00:42:21.170 } 00:42:21.170 ], 00:42:21.170 "core_count": 1 00:42:21.170 } 00:42:21.170 00:47:44 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:21.170 00:47:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:21.428 00:47:45 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:21.428 00:47:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:21.428 00:47:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:21.428 00:47:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:21.428 00:47:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:21.428 00:47:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:21.685 00:47:45 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:21.685 00:47:45 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:21.685 00:47:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:21.685 00:47:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:21.685 00:47:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:21.685 00:47:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:21.685 00:47:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:21.943 00:47:45 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:21.943 00:47:45 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:21.943 00:47:45 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:21.943 00:47:45 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:21.943 00:47:45 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:21.943 00:47:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:21.943 00:47:45 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:21.943 00:47:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:21.943 00:47:45 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:21.943 00:47:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:22.201 [2024-11-18 00:47:46.008876] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:22.201 [2024-11-18 00:47:46.009505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1012990 (107): Transport endpoint is not connected 00:42:22.201 [2024-11-18 00:47:46.010496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1012990 (9): Bad file descriptor 00:42:22.201 [2024-11-18 00:47:46.011495] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:22.201 [2024-11-18 00:47:46.011515] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:22.201 [2024-11-18 00:47:46.011529] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:22.201 [2024-11-18 00:47:46.011544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:42:22.201 request: 00:42:22.201 { 00:42:22.201 "name": "nvme0", 00:42:22.201 "trtype": "tcp", 00:42:22.201 "traddr": "127.0.0.1", 00:42:22.201 "adrfam": "ipv4", 00:42:22.201 "trsvcid": "4420", 00:42:22.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:22.201 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:22.201 "prchk_reftag": false, 00:42:22.201 "prchk_guard": false, 00:42:22.201 "hdgst": false, 00:42:22.201 "ddgst": false, 00:42:22.201 "psk": "key1", 00:42:22.201 "allow_unrecognized_csi": false, 00:42:22.201 "method": "bdev_nvme_attach_controller", 00:42:22.201 "req_id": 1 00:42:22.201 } 00:42:22.201 Got JSON-RPC error response 00:42:22.201 response: 00:42:22.201 { 00:42:22.201 "code": -5, 00:42:22.201 "message": "Input/output error" 00:42:22.201 } 00:42:22.459 00:47:46 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:22.459 00:47:46 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:22.459 00:47:46 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:22.459 00:47:46 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:22.459 00:47:46 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:22.459 00:47:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:22.459 00:47:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:22.459 00:47:46 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:42:22.459 00:47:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:22.459 00:47:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:22.717 00:47:46 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:22.717 00:47:46 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:22.717 00:47:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:22.717 00:47:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:22.717 00:47:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:22.717 00:47:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:22.717 00:47:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:23.005 00:47:46 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:23.005 00:47:46 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:23.005 00:47:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:23.293 00:47:46 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:23.293 00:47:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:23.600 00:47:47 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:23.600 00:47:47 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:23.600 00:47:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:23.600 00:47:47 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:42:23.600 00:47:47 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.kOa70EFlxI 00:42:23.600 00:47:47 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.kOa70EFlxI 00:42:23.600 00:47:47 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:23.600 00:47:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.kOa70EFlxI 00:42:23.600 00:47:47 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:23.600 00:47:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:23.600 00:47:47 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:23.600 00:47:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:23.600 00:47:47 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kOa70EFlxI 00:42:23.600 00:47:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kOa70EFlxI 00:42:23.858 [2024-11-18 00:47:47.649498] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.kOa70EFlxI': 0100660 00:42:23.858 [2024-11-18 00:47:47.649533] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:23.858 request: 00:42:23.858 { 00:42:23.858 "name": "key0", 00:42:23.858 "path": "/tmp/tmp.kOa70EFlxI", 00:42:23.858 "method": "keyring_file_add_key", 00:42:23.858 "req_id": 1 00:42:23.858 } 00:42:23.858 Got JSON-RPC error response 00:42:23.858 response: 00:42:23.858 { 00:42:23.858 "code": -1, 00:42:23.858 "message": "Operation not permitted" 00:42:23.858 } 00:42:23.858 00:47:47 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:23.858 00:47:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:23.858 00:47:47 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:23.858 00:47:47 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:23.858 00:47:47 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.kOa70EFlxI 00:42:23.858 00:47:47 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kOa70EFlxI 00:42:23.858 00:47:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kOa70EFlxI 00:42:24.423 00:47:47 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.kOa70EFlxI 00:42:24.423 00:47:47 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:24.423 00:47:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:24.423 00:47:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:24.423 00:47:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:24.423 00:47:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:24.423 00:47:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:24.423 00:47:48 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:24.423 00:47:48 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:24.423 00:47:48 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:24.423 00:47:48 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:24.423 00:47:48 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:24.423 00:47:48 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:24.423 00:47:48 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:24.423 00:47:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:24.423 00:47:48 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:24.423 00:47:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:24.682 [2024-11-18 00:47:48.495801] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.kOa70EFlxI': No such file or directory 00:42:24.682 [2024-11-18 00:47:48.495840] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:24.682 [2024-11-18 00:47:48.495873] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:24.682 [2024-11-18 00:47:48.495886] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:24.682 [2024-11-18 00:47:48.495899] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:24.682 [2024-11-18 00:47:48.495910] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:24.682 request: 00:42:24.682 { 00:42:24.682 "name": "nvme0", 00:42:24.682 "trtype": "tcp", 00:42:24.682 "traddr": "127.0.0.1", 00:42:24.682 "adrfam": "ipv4", 00:42:24.682 "trsvcid": "4420", 00:42:24.682 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:24.682 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:42:24.682 "prchk_reftag": false, 00:42:24.682 "prchk_guard": false, 00:42:24.682 "hdgst": false, 00:42:24.682 "ddgst": false, 00:42:24.682 "psk": "key0", 00:42:24.682 "allow_unrecognized_csi": false, 00:42:24.682 "method": "bdev_nvme_attach_controller", 00:42:24.682 "req_id": 1 00:42:24.682 } 00:42:24.682 Got JSON-RPC error response 00:42:24.682 response: 00:42:24.682 { 00:42:24.682 "code": -19, 00:42:24.682 "message": "No such device" 00:42:24.682 } 00:42:24.940 00:47:48 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:24.940 00:47:48 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:24.940 00:47:48 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:24.940 00:47:48 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:24.940 00:47:48 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:24.940 00:47:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:25.197 00:47:48 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:25.197 00:47:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:25.197 00:47:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:25.197 00:47:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:25.197 00:47:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:25.197 00:47:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:25.197 00:47:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qYlsnCXeJq 00:42:25.197 00:47:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:25.197 00:47:48 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:25.197 00:47:48 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:42:25.197 00:47:48 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:25.197 00:47:48 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:25.197 00:47:48 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:25.197 00:47:48 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:25.197 00:47:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qYlsnCXeJq 00:42:25.197 00:47:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qYlsnCXeJq 00:42:25.197 00:47:48 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.qYlsnCXeJq 00:42:25.197 00:47:48 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qYlsnCXeJq 00:42:25.197 00:47:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qYlsnCXeJq 00:42:25.455 00:47:49 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:25.455 00:47:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:25.713 nvme0n1 00:42:25.713 00:47:49 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:25.713 00:47:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:25.713 00:47:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:25.713 00:47:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:25.713 00:47:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:25.713 00:47:49 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:25.970 00:47:49 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:25.970 00:47:49 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:25.970 00:47:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:26.228 00:47:49 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:26.228 00:47:49 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:26.228 00:47:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:26.228 00:47:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:26.228 00:47:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:26.485 00:47:50 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:26.485 00:47:50 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:26.485 00:47:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:26.485 00:47:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:26.485 00:47:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:26.485 00:47:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:26.485 00:47:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:27.049 00:47:50 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:27.049 00:47:50 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:27.049 00:47:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:42:27.049 00:47:50 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:27.049 00:47:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:27.049 00:47:50 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:27.307 00:47:51 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:27.307 00:47:51 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qYlsnCXeJq 00:42:27.307 00:47:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qYlsnCXeJq 00:42:27.873 00:47:51 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9pJyiOeu2Z 00:42:27.873 00:47:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9pJyiOeu2Z 00:42:27.873 00:47:51 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:27.873 00:47:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:28.452 nvme0n1 00:42:28.452 00:47:51 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:28.452 00:47:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:28.709 00:47:52 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:28.709 "subsystems": [ 00:42:28.709 { 00:42:28.709 "subsystem": 
"keyring", 00:42:28.709 "config": [ 00:42:28.709 { 00:42:28.709 "method": "keyring_file_add_key", 00:42:28.709 "params": { 00:42:28.709 "name": "key0", 00:42:28.709 "path": "/tmp/tmp.qYlsnCXeJq" 00:42:28.709 } 00:42:28.709 }, 00:42:28.709 { 00:42:28.709 "method": "keyring_file_add_key", 00:42:28.709 "params": { 00:42:28.709 "name": "key1", 00:42:28.709 "path": "/tmp/tmp.9pJyiOeu2Z" 00:42:28.709 } 00:42:28.709 } 00:42:28.709 ] 00:42:28.709 }, 00:42:28.709 { 00:42:28.709 "subsystem": "iobuf", 00:42:28.709 "config": [ 00:42:28.709 { 00:42:28.709 "method": "iobuf_set_options", 00:42:28.709 "params": { 00:42:28.709 "small_pool_count": 8192, 00:42:28.709 "large_pool_count": 1024, 00:42:28.709 "small_bufsize": 8192, 00:42:28.709 "large_bufsize": 135168, 00:42:28.709 "enable_numa": false 00:42:28.709 } 00:42:28.709 } 00:42:28.709 ] 00:42:28.709 }, 00:42:28.709 { 00:42:28.709 "subsystem": "sock", 00:42:28.709 "config": [ 00:42:28.709 { 00:42:28.709 "method": "sock_set_default_impl", 00:42:28.709 "params": { 00:42:28.709 "impl_name": "posix" 00:42:28.709 } 00:42:28.709 }, 00:42:28.709 { 00:42:28.709 "method": "sock_impl_set_options", 00:42:28.709 "params": { 00:42:28.709 "impl_name": "ssl", 00:42:28.709 "recv_buf_size": 4096, 00:42:28.709 "send_buf_size": 4096, 00:42:28.709 "enable_recv_pipe": true, 00:42:28.709 "enable_quickack": false, 00:42:28.709 "enable_placement_id": 0, 00:42:28.709 "enable_zerocopy_send_server": true, 00:42:28.709 "enable_zerocopy_send_client": false, 00:42:28.709 "zerocopy_threshold": 0, 00:42:28.709 "tls_version": 0, 00:42:28.709 "enable_ktls": false 00:42:28.709 } 00:42:28.709 }, 00:42:28.709 { 00:42:28.709 "method": "sock_impl_set_options", 00:42:28.709 "params": { 00:42:28.709 "impl_name": "posix", 00:42:28.709 "recv_buf_size": 2097152, 00:42:28.709 "send_buf_size": 2097152, 00:42:28.709 "enable_recv_pipe": true, 00:42:28.709 "enable_quickack": false, 00:42:28.709 "enable_placement_id": 0, 00:42:28.709 "enable_zerocopy_send_server": true, 
00:42:28.709 "enable_zerocopy_send_client": false, 00:42:28.709 "zerocopy_threshold": 0, 00:42:28.709 "tls_version": 0, 00:42:28.709 "enable_ktls": false 00:42:28.709 } 00:42:28.709 } 00:42:28.709 ] 00:42:28.709 }, 00:42:28.709 { 00:42:28.709 "subsystem": "vmd", 00:42:28.709 "config": [] 00:42:28.709 }, 00:42:28.709 { 00:42:28.709 "subsystem": "accel", 00:42:28.709 "config": [ 00:42:28.709 { 00:42:28.709 "method": "accel_set_options", 00:42:28.709 "params": { 00:42:28.709 "small_cache_size": 128, 00:42:28.709 "large_cache_size": 16, 00:42:28.709 "task_count": 2048, 00:42:28.709 "sequence_count": 2048, 00:42:28.709 "buf_count": 2048 00:42:28.709 } 00:42:28.709 } 00:42:28.709 ] 00:42:28.709 }, 00:42:28.709 { 00:42:28.709 "subsystem": "bdev", 00:42:28.709 "config": [ 00:42:28.709 { 00:42:28.709 "method": "bdev_set_options", 00:42:28.709 "params": { 00:42:28.709 "bdev_io_pool_size": 65535, 00:42:28.709 "bdev_io_cache_size": 256, 00:42:28.709 "bdev_auto_examine": true, 00:42:28.709 "iobuf_small_cache_size": 128, 00:42:28.709 "iobuf_large_cache_size": 16 00:42:28.709 } 00:42:28.709 }, 00:42:28.709 { 00:42:28.709 "method": "bdev_raid_set_options", 00:42:28.709 "params": { 00:42:28.709 "process_window_size_kb": 1024, 00:42:28.709 "process_max_bandwidth_mb_sec": 0 00:42:28.709 } 00:42:28.709 }, 00:42:28.709 { 00:42:28.709 "method": "bdev_iscsi_set_options", 00:42:28.709 "params": { 00:42:28.709 "timeout_sec": 30 00:42:28.709 } 00:42:28.709 }, 00:42:28.709 { 00:42:28.709 "method": "bdev_nvme_set_options", 00:42:28.709 "params": { 00:42:28.709 "action_on_timeout": "none", 00:42:28.709 "timeout_us": 0, 00:42:28.709 "timeout_admin_us": 0, 00:42:28.709 "keep_alive_timeout_ms": 10000, 00:42:28.709 "arbitration_burst": 0, 00:42:28.709 "low_priority_weight": 0, 00:42:28.709 "medium_priority_weight": 0, 00:42:28.709 "high_priority_weight": 0, 00:42:28.709 "nvme_adminq_poll_period_us": 10000, 00:42:28.709 "nvme_ioq_poll_period_us": 0, 00:42:28.709 "io_queue_requests": 512, 
00:42:28.709 "delay_cmd_submit": true, 00:42:28.709 "transport_retry_count": 4, 00:42:28.709 "bdev_retry_count": 3, 00:42:28.709 "transport_ack_timeout": 0, 00:42:28.709 "ctrlr_loss_timeout_sec": 0, 00:42:28.709 "reconnect_delay_sec": 0, 00:42:28.709 "fast_io_fail_timeout_sec": 0, 00:42:28.709 "disable_auto_failback": false, 00:42:28.709 "generate_uuids": false, 00:42:28.709 "transport_tos": 0, 00:42:28.709 "nvme_error_stat": false, 00:42:28.709 "rdma_srq_size": 0, 00:42:28.709 "io_path_stat": false, 00:42:28.709 "allow_accel_sequence": false, 00:42:28.709 "rdma_max_cq_size": 0, 00:42:28.709 "rdma_cm_event_timeout_ms": 0, 00:42:28.709 "dhchap_digests": [ 00:42:28.709 "sha256", 00:42:28.709 "sha384", 00:42:28.709 "sha512" 00:42:28.709 ], 00:42:28.709 "dhchap_dhgroups": [ 00:42:28.709 "null", 00:42:28.709 "ffdhe2048", 00:42:28.709 "ffdhe3072", 00:42:28.709 "ffdhe4096", 00:42:28.709 "ffdhe6144", 00:42:28.709 "ffdhe8192" 00:42:28.709 ] 00:42:28.709 } 00:42:28.709 }, 00:42:28.709 { 00:42:28.709 "method": "bdev_nvme_attach_controller", 00:42:28.709 "params": { 00:42:28.709 "name": "nvme0", 00:42:28.709 "trtype": "TCP", 00:42:28.709 "adrfam": "IPv4", 00:42:28.709 "traddr": "127.0.0.1", 00:42:28.709 "trsvcid": "4420", 00:42:28.709 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:28.709 "prchk_reftag": false, 00:42:28.709 "prchk_guard": false, 00:42:28.709 "ctrlr_loss_timeout_sec": 0, 00:42:28.709 "reconnect_delay_sec": 0, 00:42:28.709 "fast_io_fail_timeout_sec": 0, 00:42:28.709 "psk": "key0", 00:42:28.709 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:28.709 "hdgst": false, 00:42:28.709 "ddgst": false, 00:42:28.709 "multipath": "multipath" 00:42:28.709 } 00:42:28.709 }, 00:42:28.709 { 00:42:28.709 "method": "bdev_nvme_set_hotplug", 00:42:28.709 "params": { 00:42:28.709 "period_us": 100000, 00:42:28.709 "enable": false 00:42:28.709 } 00:42:28.709 }, 00:42:28.709 { 00:42:28.709 "method": "bdev_wait_for_examine" 00:42:28.709 } 00:42:28.709 ] 00:42:28.709 }, 00:42:28.709 { 
00:42:28.709 "subsystem": "nbd", 00:42:28.709 "config": [] 00:42:28.709 } 00:42:28.709 ] 00:42:28.709 }' 00:42:28.709 00:47:52 keyring_file -- keyring/file.sh@115 -- # killprocess 482079 00:42:28.709 00:47:52 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 482079 ']' 00:42:28.709 00:47:52 keyring_file -- common/autotest_common.sh@958 -- # kill -0 482079 00:42:28.709 00:47:52 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:28.709 00:47:52 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:28.709 00:47:52 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482079 00:42:28.709 00:47:52 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:28.709 00:47:52 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:28.709 00:47:52 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482079' 00:42:28.709 killing process with pid 482079 00:42:28.709 00:47:52 keyring_file -- common/autotest_common.sh@973 -- # kill 482079 00:42:28.709 Received shutdown signal, test time was about 1.000000 seconds 00:42:28.709 00:42:28.709 Latency(us) 00:42:28.709 [2024-11-17T23:47:52.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:28.709 [2024-11-17T23:47:52.531Z] =================================================================================================================== 00:42:28.709 [2024-11-17T23:47:52.531Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:28.709 00:47:52 keyring_file -- common/autotest_common.sh@978 -- # wait 482079 00:42:28.966 00:47:52 keyring_file -- keyring/file.sh@118 -- # bperfpid=483558 00:42:28.966 00:47:52 keyring_file -- keyring/file.sh@120 -- # waitforlisten 483558 /var/tmp/bperf.sock 00:42:28.966 00:47:52 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 483558 ']' 00:42:28.966 00:47:52 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:42:28.966 00:47:52 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:28.966 00:47:52 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:28.966 00:47:52 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:42:28.966 "subsystems": [ 00:42:28.966 { 00:42:28.966 "subsystem": "keyring", 00:42:28.966 "config": [ 00:42:28.966 { 00:42:28.966 "method": "keyring_file_add_key", 00:42:28.966 "params": { 00:42:28.966 "name": "key0", 00:42:28.966 "path": "/tmp/tmp.qYlsnCXeJq" 00:42:28.966 } 00:42:28.966 }, 00:42:28.966 { 00:42:28.966 "method": "keyring_file_add_key", 00:42:28.966 "params": { 00:42:28.966 "name": "key1", 00:42:28.966 "path": "/tmp/tmp.9pJyiOeu2Z" 00:42:28.966 } 00:42:28.966 } 00:42:28.966 ] 00:42:28.966 }, 00:42:28.966 { 00:42:28.966 "subsystem": "iobuf", 00:42:28.966 "config": [ 00:42:28.966 { 00:42:28.966 "method": "iobuf_set_options", 00:42:28.966 "params": { 00:42:28.966 "small_pool_count": 8192, 00:42:28.966 "large_pool_count": 1024, 00:42:28.966 "small_bufsize": 8192, 00:42:28.966 "large_bufsize": 135168, 00:42:28.966 "enable_numa": false 00:42:28.966 } 00:42:28.966 } 00:42:28.966 ] 00:42:28.966 }, 00:42:28.966 { 00:42:28.966 "subsystem": "sock", 00:42:28.966 "config": [ 00:42:28.966 { 00:42:28.966 "method": "sock_set_default_impl", 00:42:28.966 "params": { 00:42:28.966 "impl_name": "posix" 00:42:28.966 } 00:42:28.966 }, 00:42:28.966 { 00:42:28.966 "method": "sock_impl_set_options", 00:42:28.966 "params": { 00:42:28.966 "impl_name": "ssl", 00:42:28.966 "recv_buf_size": 4096, 00:42:28.966 "send_buf_size": 4096, 00:42:28.966 "enable_recv_pipe": true, 00:42:28.966 "enable_quickack": false, 00:42:28.966 "enable_placement_id": 0, 00:42:28.966 "enable_zerocopy_send_server": true, 00:42:28.966 "enable_zerocopy_send_client": false, 00:42:28.966 
"zerocopy_threshold": 0, 00:42:28.966 "tls_version": 0, 00:42:28.966 "enable_ktls": false 00:42:28.966 } 00:42:28.966 }, 00:42:28.966 { 00:42:28.966 "method": "sock_impl_set_options", 00:42:28.966 "params": { 00:42:28.966 "impl_name": "posix", 00:42:28.966 "recv_buf_size": 2097152, 00:42:28.966 "send_buf_size": 2097152, 00:42:28.966 "enable_recv_pipe": true, 00:42:28.966 "enable_quickack": false, 00:42:28.966 "enable_placement_id": 0, 00:42:28.966 "enable_zerocopy_send_server": true, 00:42:28.966 "enable_zerocopy_send_client": false, 00:42:28.966 "zerocopy_threshold": 0, 00:42:28.966 "tls_version": 0, 00:42:28.966 "enable_ktls": false 00:42:28.966 } 00:42:28.966 } 00:42:28.966 ] 00:42:28.966 }, 00:42:28.966 { 00:42:28.966 "subsystem": "vmd", 00:42:28.966 "config": [] 00:42:28.966 }, 00:42:28.966 { 00:42:28.966 "subsystem": "accel", 00:42:28.966 "config": [ 00:42:28.966 { 00:42:28.966 "method": "accel_set_options", 00:42:28.966 "params": { 00:42:28.966 "small_cache_size": 128, 00:42:28.966 "large_cache_size": 16, 00:42:28.966 "task_count": 2048, 00:42:28.966 "sequence_count": 2048, 00:42:28.966 "buf_count": 2048 00:42:28.966 } 00:42:28.966 } 00:42:28.966 ] 00:42:28.966 }, 00:42:28.966 { 00:42:28.966 "subsystem": "bdev", 00:42:28.966 "config": [ 00:42:28.966 { 00:42:28.966 "method": "bdev_set_options", 00:42:28.966 "params": { 00:42:28.966 "bdev_io_pool_size": 65535, 00:42:28.966 "bdev_io_cache_size": 256, 00:42:28.966 "bdev_auto_examine": true, 00:42:28.966 "iobuf_small_cache_size": 128, 00:42:28.966 "iobuf_large_cache_size": 16 00:42:28.966 } 00:42:28.966 }, 00:42:28.966 { 00:42:28.966 "method": "bdev_raid_set_options", 00:42:28.966 "params": { 00:42:28.966 "process_window_size_kb": 1024, 00:42:28.966 "process_max_bandwidth_mb_sec": 0 00:42:28.966 } 00:42:28.966 }, 00:42:28.966 { 00:42:28.966 "method": "bdev_iscsi_set_options", 00:42:28.966 "params": { 00:42:28.966 "timeout_sec": 30 00:42:28.966 } 00:42:28.966 }, 00:42:28.966 { 00:42:28.966 "method": 
"bdev_nvme_set_options", 00:42:28.966 "params": { 00:42:28.966 "action_on_timeout": "none", 00:42:28.966 "timeout_us": 0, 00:42:28.966 "timeout_admin_us": 0, 00:42:28.966 "keep_alive_timeout_ms": 10000, 00:42:28.966 "arbitration_burst": 0, 00:42:28.966 "low_priority_weight": 0, 00:42:28.967 "medium_priority_weight": 0, 00:42:28.967 "high_priority_weight": 0, 00:42:28.967 "nvme_adminq_poll_period_us": 10000, 00:42:28.967 "nvme_ioq_poll_period_us": 0, 00:42:28.967 "io_queue_requests": 512, 00:42:28.967 "delay_cmd_submit": true, 00:42:28.967 "transport_retry_count": 4, 00:42:28.967 "bdev_retry_count": 3, 00:42:28.967 "transport_ack_timeout": 0, 00:42:28.967 "ctrlr_loss_timeout_sec": 0, 00:42:28.967 "reconnect_delay_sec": 0, 00:42:28.967 "fast_io_fail_timeout_sec": 0, 00:42:28.967 "disable_auto_failback": false, 00:42:28.967 "generate_uuids": false, 00:42:28.967 "transport_tos": 0, 00:42:28.967 "nvme_error_stat": false, 00:42:28.967 "rdma_srq_size": 0, 00:42:28.967 "io_path_stat": false, 00:42:28.967 "allow_accel_sequence": false, 00:42:28.967 "rdma_max_cq_size": 0, 00:42:28.967 "rdma_cm_event_timeout_ms": 0, 00:42:28.967 "dhchap_digests": [ 00:42:28.967 "sha256", 00:42:28.967 "sha384", 00:42:28.967 "sha512" 00:42:28.967 ], 00:42:28.967 "dhchap_dhgroups": [ 00:42:28.967 "null", 00:42:28.967 "ffdhe2048", 00:42:28.967 "ffdhe3072", 00:42:28.967 "ffdhe4096", 00:42:28.967 "ffdhe6144", 00:42:28.967 "ffdhe8192" 00:42:28.967 ] 00:42:28.967 } 00:42:28.967 }, 00:42:28.967 { 00:42:28.967 "method": "bdev_nvme_attach_controller", 00:42:28.967 "params": { 00:42:28.967 "name": "nvme0", 00:42:28.967 "trtype": "TCP", 00:42:28.967 "adrfam": "IPv4", 00:42:28.967 "traddr": "127.0.0.1", 00:42:28.967 "trsvcid": "4420", 00:42:28.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:28.967 "prchk_reftag": false, 00:42:28.967 "prchk_guard": false, 00:42:28.967 "ctrlr_loss_timeout_sec": 0, 00:42:28.967 "reconnect_delay_sec": 0, 00:42:28.967 "fast_io_fail_timeout_sec": 0, 00:42:28.967 "psk": "key0", 
00:42:28.967 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:28.967 "hdgst": false, 00:42:28.967 "ddgst": false, 00:42:28.967 "multipath": "multipath" 00:42:28.967 } 00:42:28.967 }, 00:42:28.967 { 00:42:28.967 "method": "bdev_nvme_set_hotplug", 00:42:28.967 "params": { 00:42:28.967 "period_us": 100000, 00:42:28.967 "enable": false 00:42:28.967 } 00:42:28.967 }, 00:42:28.967 { 00:42:28.967 "method": "bdev_wait_for_examine" 00:42:28.967 } 00:42:28.967 ] 00:42:28.967 }, 00:42:28.967 { 00:42:28.967 "subsystem": "nbd", 00:42:28.967 "config": [] 00:42:28.967 } 00:42:28.967 ] 00:42:28.967 }' 00:42:28.967 00:47:52 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:28.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:28.967 00:47:52 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:28.967 00:47:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:28.967 [2024-11-18 00:47:52.604257] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:42:28.967 [2024-11-18 00:47:52.604371] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483558 ] 00:42:28.967 [2024-11-18 00:47:52.674421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:28.967 [2024-11-18 00:47:52.722010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:29.224 [2024-11-18 00:47:52.897702] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:29.224 00:47:53 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:29.224 00:47:53 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:29.224 00:47:53 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:42:29.224 00:47:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:29.224 00:47:53 keyring_file -- keyring/file.sh@121 -- # jq length 00:42:29.483 00:47:53 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:29.483 00:47:53 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:42:29.483 00:47:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:29.483 00:47:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:29.483 00:47:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:29.483 00:47:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:29.483 00:47:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:29.741 00:47:53 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:42:29.741 00:47:53 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:42:29.741 00:47:53 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:29.741 00:47:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:29.741 00:47:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:29.741 00:47:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:29.741 00:47:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:30.307 00:47:53 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:42:30.307 00:47:53 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:42:30.307 00:47:53 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:42:30.307 00:47:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:30.307 00:47:54 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:42:30.307 00:47:54 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:30.307 00:47:54 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.qYlsnCXeJq /tmp/tmp.9pJyiOeu2Z 00:42:30.307 00:47:54 keyring_file -- keyring/file.sh@20 -- # killprocess 483558 00:42:30.307 00:47:54 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 483558 ']' 00:42:30.307 00:47:54 keyring_file -- common/autotest_common.sh@958 -- # kill -0 483558 00:42:30.307 00:47:54 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:30.307 00:47:54 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:30.307 00:47:54 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483558 00:42:30.566 00:47:54 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:30.566 00:47:54 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:30.566 00:47:54 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 483558' 00:42:30.566 killing process with pid 483558 00:42:30.566 00:47:54 keyring_file -- common/autotest_common.sh@973 -- # kill 483558 00:42:30.566 Received shutdown signal, test time was about 1.000000 seconds 00:42:30.566 00:42:30.566 Latency(us) 00:42:30.566 [2024-11-17T23:47:54.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:30.566 [2024-11-17T23:47:54.388Z] =================================================================================================================== 00:42:30.566 [2024-11-17T23:47:54.388Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:30.566 00:47:54 keyring_file -- common/autotest_common.sh@978 -- # wait 483558 00:42:30.566 00:47:54 keyring_file -- keyring/file.sh@21 -- # killprocess 482066 00:42:30.566 00:47:54 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 482066 ']' 00:42:30.566 00:47:54 keyring_file -- common/autotest_common.sh@958 -- # kill -0 482066 00:42:30.566 00:47:54 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:30.566 00:47:54 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:30.566 00:47:54 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482066 00:42:30.566 00:47:54 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:30.566 00:47:54 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:30.566 00:47:54 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482066' 00:42:30.566 killing process with pid 482066 00:42:30.566 00:47:54 keyring_file -- common/autotest_common.sh@973 -- # kill 482066 00:42:30.566 00:47:54 keyring_file -- common/autotest_common.sh@978 -- # wait 482066 00:42:31.132 00:42:31.132 real 0m14.583s 00:42:31.132 user 0m37.212s 00:42:31.132 sys 0m3.216s 00:42:31.132 00:47:54 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:31.132 00:47:54 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:31.132 ************************************ 00:42:31.132 END TEST keyring_file 00:42:31.132 ************************************ 00:42:31.132 00:47:54 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:42:31.132 00:47:54 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:31.132 00:47:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:31.132 00:47:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:31.132 00:47:54 -- common/autotest_common.sh@10 -- # set +x 00:42:31.132 ************************************ 00:42:31.132 START TEST keyring_linux 00:42:31.132 ************************************ 00:42:31.132 00:47:54 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:31.133 Joined session keyring: 139028083 00:42:31.133 * Looking for test storage... 
00:42:31.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:31.133 00:47:54 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:31.133 00:47:54 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:42:31.133 00:47:54 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:31.391 00:47:54 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@345 -- # : 1 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:31.391 00:47:54 keyring_linux -- scripts/common.sh@368 -- # return 0 00:42:31.391 00:47:54 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:31.391 00:47:54 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:31.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.391 --rc genhtml_branch_coverage=1 00:42:31.391 --rc genhtml_function_coverage=1 00:42:31.391 --rc genhtml_legend=1 00:42:31.391 --rc geninfo_all_blocks=1 00:42:31.391 --rc geninfo_unexecuted_blocks=1 00:42:31.391 00:42:31.391 ' 00:42:31.391 00:47:54 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:31.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.391 --rc genhtml_branch_coverage=1 00:42:31.391 --rc genhtml_function_coverage=1 00:42:31.391 --rc genhtml_legend=1 00:42:31.391 --rc geninfo_all_blocks=1 00:42:31.391 --rc geninfo_unexecuted_blocks=1 00:42:31.391 00:42:31.391 ' 
00:42:31.391 00:47:54 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:31.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.391 --rc genhtml_branch_coverage=1 00:42:31.391 --rc genhtml_function_coverage=1 00:42:31.391 --rc genhtml_legend=1 00:42:31.391 --rc geninfo_all_blocks=1 00:42:31.391 --rc geninfo_unexecuted_blocks=1 00:42:31.391 00:42:31.391 ' 00:42:31.391 00:47:54 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:31.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.391 --rc genhtml_branch_coverage=1 00:42:31.391 --rc genhtml_function_coverage=1 00:42:31.391 --rc genhtml_legend=1 00:42:31.391 --rc geninfo_all_blocks=1 00:42:31.391 --rc geninfo_unexecuted_blocks=1 00:42:31.391 00:42:31.391 ' 00:42:31.391 00:47:54 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:31.391 00:47:54 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:31.391 00:47:54 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:31.391 00:47:54 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:31.392 00:47:54 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:42:31.392 00:47:54 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:31.392 00:47:54 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:31.392 00:47:54 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:31.392 00:47:54 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.392 00:47:54 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.392 00:47:54 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.392 00:47:54 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:31.392 00:47:54 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:42:31.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:31.392 00:47:54 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:31.392 00:47:54 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:31.392 00:47:54 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:31.392 00:47:54 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:31.392 00:47:54 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:31.392 00:47:54 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:31.392 00:47:54 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:31.392 00:47:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:31.392 00:47:54 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:31.392 00:47:54 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:31.392 00:47:54 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:31.392 00:47:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:31.392 00:47:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:31.392 00:47:54 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:31.392 00:47:55 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:31.392 00:47:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:31.392 /tmp/:spdk-test:key0 00:42:31.392 00:47:55 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:31.392 00:47:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:31.392 00:47:55 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:31.392 00:47:55 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:31.392 00:47:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:31.392 00:47:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:31.392 00:47:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:31.392 00:47:55 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:31.392 00:47:55 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:31.392 00:47:55 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:31.392 00:47:55 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:31.392 00:47:55 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:31.392 00:47:55 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:31.392 00:47:55 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:31.392 00:47:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:31.392 /tmp/:spdk-test:key1 00:42:31.392 00:47:55 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=484031 00:42:31.392 00:47:55 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:31.392 00:47:55 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 484031 00:42:31.392 00:47:55 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 484031 ']' 00:42:31.392 00:47:55 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:31.392 00:47:55 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:31.392 00:47:55 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:31.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:31.392 00:47:55 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:31.392 00:47:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:31.392 [2024-11-18 00:47:55.113496] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:42:31.392 [2024-11-18 00:47:55.113585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484031 ] 00:42:31.392 [2024-11-18 00:47:55.180006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:31.650 [2024-11-18 00:47:55.232325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:31.909 00:47:55 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:31.909 00:47:55 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:42:31.909 00:47:55 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:31.909 00:47:55 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.909 00:47:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:31.909 [2024-11-18 00:47:55.512791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:31.909 null0 00:42:31.909 [2024-11-18 00:47:55.544845] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:31.909 [2024-11-18 00:47:55.545278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:31.909 00:47:55 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.909 00:47:55 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:31.909 910109309 00:42:31.909 00:47:55 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:31.909 612963814 00:42:31.909 00:47:55 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=484044 00:42:31.909 00:47:55 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:31.909 00:47:55 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 484044 /var/tmp/bperf.sock 00:42:31.909 00:47:55 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 484044 ']' 00:42:31.909 00:47:55 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:31.909 00:47:55 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:31.909 00:47:55 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:31.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:31.909 00:47:55 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:31.909 00:47:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:31.909 [2024-11-18 00:47:55.613364] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:42:31.909 [2024-11-18 00:47:55.613446] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484044 ] 00:42:31.909 [2024-11-18 00:47:55.681466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:31.909 [2024-11-18 00:47:55.727842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:32.167 00:47:55 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:32.167 00:47:55 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:42:32.167 00:47:55 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:32.167 00:47:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:32.424 00:47:56 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:32.424 00:47:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:32.683 00:47:56 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:32.683 00:47:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:32.941 [2024-11-18 00:47:56.726195] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:33.199 nvme0n1 00:42:33.199 00:47:56 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:42:33.199 00:47:56 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:33.199 00:47:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:33.199 00:47:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:33.199 00:47:56 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:33.199 00:47:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:33.457 00:47:57 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:33.457 00:47:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:33.457 00:47:57 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:33.458 00:47:57 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:33.458 00:47:57 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:33.458 00:47:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:33.458 00:47:57 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:33.715 00:47:57 keyring_linux -- keyring/linux.sh@25 -- # sn=910109309 00:42:33.715 00:47:57 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:33.715 00:47:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:33.716 00:47:57 keyring_linux -- keyring/linux.sh@26 -- # [[ 910109309 == \9\1\0\1\0\9\3\0\9 ]] 00:42:33.716 00:47:57 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 910109309 00:42:33.716 00:47:57 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:33.716 00:47:57 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:33.716 Running I/O for 1 seconds... 00:42:35.089 11624.00 IOPS, 45.41 MiB/s 00:42:35.089 Latency(us) 00:42:35.089 [2024-11-17T23:47:58.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:35.089 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:35.089 nvme0n1 : 1.01 11629.61 45.43 0.00 0.00 10941.44 3046.21 13981.01 00:42:35.089 [2024-11-17T23:47:58.911Z] =================================================================================================================== 00:42:35.089 [2024-11-17T23:47:58.911Z] Total : 11629.61 45.43 0.00 0.00 10941.44 3046.21 13981.01 00:42:35.089 { 00:42:35.089 "results": [ 00:42:35.089 { 00:42:35.089 "job": "nvme0n1", 00:42:35.089 "core_mask": "0x2", 00:42:35.089 "workload": "randread", 00:42:35.089 "status": "finished", 00:42:35.089 "queue_depth": 128, 00:42:35.089 "io_size": 4096, 00:42:35.089 "runtime": 1.01061, 00:42:35.089 "iops": 11629.609839601824, 00:42:35.089 "mibps": 45.428163435944626, 00:42:35.089 "io_failed": 0, 00:42:35.089 "io_timeout": 0, 00:42:35.089 "avg_latency_us": 10941.44214589813, 00:42:35.089 "min_latency_us": 3046.2103703703706, 00:42:35.089 "max_latency_us": 13981.013333333334 00:42:35.089 } 00:42:35.089 ], 00:42:35.089 "core_count": 1 00:42:35.089 } 00:42:35.089 00:47:58 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:35.089 00:47:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:35.089 00:47:58 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:35.089 00:47:58 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:35.089 00:47:58 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:35.089 00:47:58 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:35.089 00:47:58 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:35.089 00:47:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:35.347 00:47:59 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:35.347 00:47:59 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:35.347 00:47:59 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:35.347 00:47:59 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:35.347 00:47:59 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:42:35.347 00:47:59 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:35.347 00:47:59 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:35.347 00:47:59 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:35.347 00:47:59 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:35.347 00:47:59 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:35.347 00:47:59 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:35.347 00:47:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:35.605 [2024-11-18 00:47:59.298999] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:35.605 [2024-11-18 00:47:59.299108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc48f0 (107): Transport endpoint is not connected 00:42:35.605 [2024-11-18 00:47:59.300101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc48f0 (9): Bad file descriptor 00:42:35.605 [2024-11-18 00:47:59.301101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:35.605 [2024-11-18 00:47:59.301123] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:35.605 [2024-11-18 00:47:59.301136] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:35.605 [2024-11-18 00:47:59.301151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:42:35.605 request: 00:42:35.605 { 00:42:35.605 "name": "nvme0", 00:42:35.605 "trtype": "tcp", 00:42:35.605 "traddr": "127.0.0.1", 00:42:35.605 "adrfam": "ipv4", 00:42:35.605 "trsvcid": "4420", 00:42:35.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:35.605 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:35.605 "prchk_reftag": false, 00:42:35.605 "prchk_guard": false, 00:42:35.605 "hdgst": false, 00:42:35.605 "ddgst": false, 00:42:35.605 "psk": ":spdk-test:key1", 00:42:35.605 "allow_unrecognized_csi": false, 00:42:35.605 "method": "bdev_nvme_attach_controller", 00:42:35.605 "req_id": 1 00:42:35.605 } 00:42:35.605 Got JSON-RPC error response 00:42:35.605 response: 00:42:35.605 { 00:42:35.605 "code": -5, 00:42:35.605 "message": "Input/output error" 00:42:35.605 } 00:42:35.605 00:47:59 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:42:35.605 00:47:59 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:35.605 00:47:59 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:35.605 00:47:59 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:35.605 00:47:59 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:35.605 00:47:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:35.605 00:47:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:35.605 00:47:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:35.605 00:47:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:35.605 00:47:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:35.605 00:47:59 keyring_linux -- keyring/linux.sh@33 -- # sn=910109309 00:42:35.605 00:47:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 910109309 00:42:35.605 1 links removed 00:42:35.605 00:47:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:35.605 00:47:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:35.605 
00:47:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:35.605 00:47:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:35.605 00:47:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:35.605 00:47:59 keyring_linux -- keyring/linux.sh@33 -- # sn=612963814 00:42:35.605 00:47:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 612963814 00:42:35.605 1 links removed 00:42:35.605 00:47:59 keyring_linux -- keyring/linux.sh@41 -- # killprocess 484044 00:42:35.605 00:47:59 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 484044 ']' 00:42:35.605 00:47:59 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 484044 00:42:35.605 00:47:59 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:42:35.605 00:47:59 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:35.605 00:47:59 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 484044 00:42:35.605 00:47:59 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:35.605 00:47:59 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:35.605 00:47:59 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 484044' 00:42:35.605 killing process with pid 484044 00:42:35.605 00:47:59 keyring_linux -- common/autotest_common.sh@973 -- # kill 484044 00:42:35.605 Received shutdown signal, test time was about 1.000000 seconds 00:42:35.605 00:42:35.605 Latency(us) 00:42:35.605 [2024-11-17T23:47:59.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:35.605 [2024-11-17T23:47:59.427Z] =================================================================================================================== 00:42:35.605 [2024-11-17T23:47:59.427Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:35.605 00:47:59 keyring_linux -- common/autotest_common.sh@978 -- # wait 484044 
00:42:35.862 00:47:59 keyring_linux -- keyring/linux.sh@42 -- # killprocess 484031 00:42:35.862 00:47:59 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 484031 ']' 00:42:35.862 00:47:59 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 484031 00:42:35.862 00:47:59 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:42:35.862 00:47:59 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:35.862 00:47:59 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 484031 00:42:35.862 00:47:59 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:35.862 00:47:59 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:35.862 00:47:59 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 484031' 00:42:35.862 killing process with pid 484031 00:42:35.862 00:47:59 keyring_linux -- common/autotest_common.sh@973 -- # kill 484031 00:42:35.862 00:47:59 keyring_linux -- common/autotest_common.sh@978 -- # wait 484031 00:42:36.427 00:42:36.427 real 0m5.167s 00:42:36.427 user 0m10.201s 00:42:36.427 sys 0m1.664s 00:42:36.427 00:47:59 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:36.427 00:47:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:36.427 ************************************ 00:42:36.427 END TEST keyring_linux 00:42:36.427 ************************************ 00:42:36.427 00:48:00 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:42:36.427 00:48:00 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:42:36.427 00:48:00 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:42:36.427 00:48:00 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:42:36.427 00:48:00 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:42:36.427 00:48:00 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:42:36.427 00:48:00 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:42:36.427 00:48:00 -- spdk/autotest.sh@346 -- # '[' 0 
-eq 1 ']' 00:42:36.427 00:48:00 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:42:36.427 00:48:00 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:42:36.427 00:48:00 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:42:36.427 00:48:00 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:42:36.427 00:48:00 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:42:36.427 00:48:00 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:42:36.427 00:48:00 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:42:36.427 00:48:00 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:42:36.427 00:48:00 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:42:36.427 00:48:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:36.427 00:48:00 -- common/autotest_common.sh@10 -- # set +x 00:42:36.427 00:48:00 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:42:36.427 00:48:00 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:42:36.427 00:48:00 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:42:36.427 00:48:00 -- common/autotest_common.sh@10 -- # set +x 00:42:38.330 INFO: APP EXITING 00:42:38.330 INFO: killing all VMs 00:42:38.330 INFO: killing vhost app 00:42:38.330 INFO: EXIT DONE 00:42:39.265 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:42:39.525 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:42:39.525 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:42:39.525 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:42:39.525 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:42:39.525 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:42:39.525 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:42:39.525 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:42:39.525 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:42:39.525 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:42:39.525 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:42:39.525 
0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:42:39.525 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:42:39.525 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:42:39.525 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:42:39.525 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:42:39.525 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:42:40.907 Cleaning 00:42:40.907 Removing: /var/run/dpdk/spdk0/config 00:42:40.907 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:40.907 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:40.907 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:40.907 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:40.907 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:42:40.907 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:42:40.907 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:42:40.907 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:42:40.907 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:40.907 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:40.907 Removing: /var/run/dpdk/spdk1/config 00:42:40.907 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:42:40.907 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:42:40.907 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:42:40.907 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:42:40.907 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:42:40.907 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:42:40.907 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:42:40.907 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:42:40.907 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:42:40.907 Removing: /var/run/dpdk/spdk1/hugepage_info 00:42:40.907 Removing: /var/run/dpdk/spdk2/config 00:42:40.907 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:42:40.907 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:42:40.907 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:42:40.907 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:42:40.907 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:42:40.907 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:42:40.907 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:42:40.907 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:42:40.907 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:42:40.907 Removing: /var/run/dpdk/spdk2/hugepage_info 00:42:40.907 Removing: /var/run/dpdk/spdk3/config 00:42:40.907 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:42:40.907 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:42:40.907 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:42:40.907 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:42:40.907 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:42:40.907 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:42:40.907 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:42:40.907 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:42:40.907 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:42:40.907 Removing: /var/run/dpdk/spdk3/hugepage_info 00:42:40.907 Removing: /var/run/dpdk/spdk4/config 00:42:40.907 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:42:40.907 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:42:40.907 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:42:40.907 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:42:40.907 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:42:40.907 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:42:40.907 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:42:40.907 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:42:40.907 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:42:40.907 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:42:40.907 Removing: /dev/shm/bdev_svc_trace.1 00:42:40.907 Removing: /dev/shm/nvmf_trace.0 00:42:40.907 Removing: /dev/shm/spdk_tgt_trace.pid99160 00:42:40.907 Removing: /var/run/dpdk/spdk0 00:42:40.907 Removing: /var/run/dpdk/spdk1 00:42:40.907 Removing: /var/run/dpdk/spdk2 00:42:40.907 Removing: /var/run/dpdk/spdk3 00:42:40.907 Removing: /var/run/dpdk/spdk4 00:42:40.907 Removing: /var/run/dpdk/spdk_pid100177 00:42:40.907 Removing: /var/run/dpdk/spdk_pid100320 00:42:40.907 Removing: /var/run/dpdk/spdk_pid101036 00:42:40.907 Removing: /var/run/dpdk/spdk_pid101043 00:42:40.907 Removing: /var/run/dpdk/spdk_pid101301 00:42:40.907 Removing: /var/run/dpdk/spdk_pid102621 00:42:40.907 Removing: /var/run/dpdk/spdk_pid103548 00:42:40.907 Removing: /var/run/dpdk/spdk_pid103861 00:42:40.907 Removing: /var/run/dpdk/spdk_pid104058 00:42:40.907 Removing: /var/run/dpdk/spdk_pid104270 00:42:40.907 Removing: /var/run/dpdk/spdk_pid104467 00:42:40.907 Removing: /var/run/dpdk/spdk_pid104626 00:42:40.907 Removing: /var/run/dpdk/spdk_pid104786 00:42:40.907 Removing: /var/run/dpdk/spdk_pid104976 00:42:40.907 Removing: /var/run/dpdk/spdk_pid105284 00:42:40.907 Removing: /var/run/dpdk/spdk_pid107772 00:42:40.907 Removing: /var/run/dpdk/spdk_pid107940 00:42:40.907 Removing: /var/run/dpdk/spdk_pid108100 00:42:40.907 Removing: /var/run/dpdk/spdk_pid108103 00:42:40.907 Removing: /var/run/dpdk/spdk_pid108418 00:42:40.907 Removing: /var/run/dpdk/spdk_pid108535 00:42:40.907 Removing: /var/run/dpdk/spdk_pid108833 00:42:40.907 Removing: /var/run/dpdk/spdk_pid108858 00:42:41.174 Removing: /var/run/dpdk/spdk_pid109226 00:42:41.174 Removing: /var/run/dpdk/spdk_pid109259 00:42:41.174 Removing: /var/run/dpdk/spdk_pid109421 00:42:41.174 Removing: /var/run/dpdk/spdk_pid109548 00:42:41.174 Removing: /var/run/dpdk/spdk_pid109928 00:42:41.174 Removing: /var/run/dpdk/spdk_pid110303 00:42:41.174 Removing: /var/run/dpdk/spdk_pid110779 00:42:41.174 Removing: /var/run/dpdk/spdk_pid113026 00:42:41.174 Removing: 
/var/run/dpdk/spdk_pid115660 00:42:41.174 Removing: /var/run/dpdk/spdk_pid122671 00:42:41.174 Removing: /var/run/dpdk/spdk_pid123083 00:42:41.174 Removing: /var/run/dpdk/spdk_pid125605 00:42:41.174 Removing: /var/run/dpdk/spdk_pid125886 00:42:41.174 Removing: /var/run/dpdk/spdk_pid128408 00:42:41.174 Removing: /var/run/dpdk/spdk_pid132259 00:42:41.174 Removing: /var/run/dpdk/spdk_pid134325 00:42:41.174 Removing: /var/run/dpdk/spdk_pid140743 00:42:41.174 Removing: /var/run/dpdk/spdk_pid146027 00:42:41.174 Removing: /var/run/dpdk/spdk_pid147962 00:42:41.174 Removing: /var/run/dpdk/spdk_pid148630 00:42:41.175 Removing: /var/run/dpdk/spdk_pid159137 00:42:41.175 Removing: /var/run/dpdk/spdk_pid161310 00:42:41.175 Removing: /var/run/dpdk/spdk_pid217403 00:42:41.175 Removing: /var/run/dpdk/spdk_pid220693 00:42:41.175 Removing: /var/run/dpdk/spdk_pid224514 00:42:41.175 Removing: /var/run/dpdk/spdk_pid228805 00:42:41.175 Removing: /var/run/dpdk/spdk_pid228807 00:42:41.175 Removing: /var/run/dpdk/spdk_pid229454 00:42:41.175 Removing: /var/run/dpdk/spdk_pid230110 00:42:41.175 Removing: /var/run/dpdk/spdk_pid230654 00:42:41.175 Removing: /var/run/dpdk/spdk_pid231049 00:42:41.175 Removing: /var/run/dpdk/spdk_pid231114 00:42:41.175 Removing: /var/run/dpdk/spdk_pid231313 00:42:41.175 Removing: /var/run/dpdk/spdk_pid231448 00:42:41.175 Removing: /var/run/dpdk/spdk_pid231458 00:42:41.175 Removing: /var/run/dpdk/spdk_pid232113 00:42:41.175 Removing: /var/run/dpdk/spdk_pid232709 00:42:41.175 Removing: /var/run/dpdk/spdk_pid233307 00:42:41.175 Removing: /var/run/dpdk/spdk_pid233703 00:42:41.175 Removing: /var/run/dpdk/spdk_pid233751 00:42:41.175 Removing: /var/run/dpdk/spdk_pid233966 00:42:41.175 Removing: /var/run/dpdk/spdk_pid234861 00:42:41.175 Removing: /var/run/dpdk/spdk_pid235706 00:42:41.175 Removing: /var/run/dpdk/spdk_pid241532 00:42:41.175 Removing: /var/run/dpdk/spdk_pid269884 00:42:41.175 Removing: /var/run/dpdk/spdk_pid272796 00:42:41.175 Removing: 
/var/run/dpdk/spdk_pid273862 00:42:41.175 Removing: /var/run/dpdk/spdk_pid275176 00:42:41.175 Removing: /var/run/dpdk/spdk_pid275320 00:42:41.175 Removing: /var/run/dpdk/spdk_pid275454 00:42:41.175 Removing: /var/run/dpdk/spdk_pid275596 00:42:41.175 Removing: /var/run/dpdk/spdk_pid276046 00:42:41.175 Removing: /var/run/dpdk/spdk_pid277355 00:42:41.175 Removing: /var/run/dpdk/spdk_pid278212 00:42:41.175 Removing: /var/run/dpdk/spdk_pid278640 00:42:41.175 Removing: /var/run/dpdk/spdk_pid280129 00:42:41.175 Removing: /var/run/dpdk/spdk_pid280554 00:42:41.175 Removing: /var/run/dpdk/spdk_pid281106 00:42:41.175 Removing: /var/run/dpdk/spdk_pid283490 00:42:41.175 Removing: /var/run/dpdk/spdk_pid286798 00:42:41.175 Removing: /var/run/dpdk/spdk_pid286799 00:42:41.175 Removing: /var/run/dpdk/spdk_pid286800 00:42:41.175 Removing: /var/run/dpdk/spdk_pid289016 00:42:41.175 Removing: /var/run/dpdk/spdk_pid291332 00:42:41.175 Removing: /var/run/dpdk/spdk_pid295245 00:42:41.175 Removing: /var/run/dpdk/spdk_pid318188 00:42:41.175 Removing: /var/run/dpdk/spdk_pid321446 00:42:41.175 Removing: /var/run/dpdk/spdk_pid325272 00:42:41.175 Removing: /var/run/dpdk/spdk_pid326221 00:42:41.175 Removing: /var/run/dpdk/spdk_pid327309 00:42:41.175 Removing: /var/run/dpdk/spdk_pid328361 00:42:41.175 Removing: /var/run/dpdk/spdk_pid331154 00:42:41.175 Removing: /var/run/dpdk/spdk_pid333619 00:42:41.176 Removing: /var/run/dpdk/spdk_pid335975 00:42:41.176 Removing: /var/run/dpdk/spdk_pid340207 00:42:41.176 Removing: /var/run/dpdk/spdk_pid340214 00:42:41.176 Removing: /var/run/dpdk/spdk_pid343108 00:42:41.176 Removing: /var/run/dpdk/spdk_pid343241 00:42:41.176 Removing: /var/run/dpdk/spdk_pid343384 00:42:41.176 Removing: /var/run/dpdk/spdk_pid343650 00:42:41.176 Removing: /var/run/dpdk/spdk_pid343776 00:42:41.176 Removing: /var/run/dpdk/spdk_pid344849 00:42:41.176 Removing: /var/run/dpdk/spdk_pid346027 00:42:41.176 Removing: /var/run/dpdk/spdk_pid347210 00:42:41.176 Removing: 
/var/run/dpdk/spdk_pid348384
00:42:41.176 Removing: /var/run/dpdk/spdk_pid349565
00:42:41.176 Removing: /var/run/dpdk/spdk_pid350854
00:42:41.176 Removing: /var/run/dpdk/spdk_pid355169
00:42:41.176 Removing: /var/run/dpdk/spdk_pid355623
00:42:41.176 Removing: /var/run/dpdk/spdk_pid356915
00:42:41.176 Removing: /var/run/dpdk/spdk_pid357652
00:42:41.176 Removing: /var/run/dpdk/spdk_pid361376
00:42:41.176 Removing: /var/run/dpdk/spdk_pid363344
00:42:41.176 Removing: /var/run/dpdk/spdk_pid366768
00:42:41.176 Removing: /var/run/dpdk/spdk_pid370238
00:42:41.176 Removing: /var/run/dpdk/spdk_pid376713
00:42:41.176 Removing: /var/run/dpdk/spdk_pid381199
00:42:41.176 Removing: /var/run/dpdk/spdk_pid381202
00:42:41.176 Removing: /var/run/dpdk/spdk_pid394461
00:42:41.176 Removing: /var/run/dpdk/spdk_pid394986
00:42:41.176 Removing: /var/run/dpdk/spdk_pid395399
00:42:41.176 Removing: /var/run/dpdk/spdk_pid395808
00:42:41.176 Removing: /var/run/dpdk/spdk_pid396389
00:42:41.176 Removing: /var/run/dpdk/spdk_pid396791
00:42:41.176 Removing: /var/run/dpdk/spdk_pid397200
00:42:41.176 Removing: /var/run/dpdk/spdk_pid397719
00:42:41.176 Removing: /var/run/dpdk/spdk_pid400114
00:42:41.176 Removing: /var/run/dpdk/spdk_pid400374
00:42:41.176 Removing: /var/run/dpdk/spdk_pid404163
00:42:41.176 Removing: /var/run/dpdk/spdk_pid404223
00:42:41.176 Removing: /var/run/dpdk/spdk_pid407579
00:42:41.176 Removing: /var/run/dpdk/spdk_pid410188
00:42:41.176 Removing: /var/run/dpdk/spdk_pid417199
00:42:41.176 Removing: /var/run/dpdk/spdk_pid418096
00:42:41.176 Removing: /var/run/dpdk/spdk_pid420601
00:42:41.176 Removing: /var/run/dpdk/spdk_pid420763
00:42:41.176 Removing: /var/run/dpdk/spdk_pid423382
00:42:41.176 Removing: /var/run/dpdk/spdk_pid427076
00:42:41.177 Removing: /var/run/dpdk/spdk_pid429237
00:42:41.177 Removing: /var/run/dpdk/spdk_pid435476
00:42:41.177 Removing: /var/run/dpdk/spdk_pid440685
00:42:41.177 Removing: /var/run/dpdk/spdk_pid441945
00:42:41.177 Removing: /var/run/dpdk/spdk_pid442532
00:42:41.177 Removing: /var/run/dpdk/spdk_pid452694
00:42:41.177 Removing: /var/run/dpdk/spdk_pid455056
00:42:41.177 Removing: /var/run/dpdk/spdk_pid457570
00:42:41.177 Removing: /var/run/dpdk/spdk_pid462602
00:42:41.177 Removing: /var/run/dpdk/spdk_pid462611
00:42:41.177 Removing: /var/run/dpdk/spdk_pid465511
00:42:41.177 Removing: /var/run/dpdk/spdk_pid466903
00:42:41.177 Removing: /var/run/dpdk/spdk_pid468287
00:42:41.177 Removing: /var/run/dpdk/spdk_pid469049
00:42:41.177 Removing: /var/run/dpdk/spdk_pid470455
00:42:41.177 Removing: /var/run/dpdk/spdk_pid471323
00:42:41.177 Removing: /var/run/dpdk/spdk_pid476592
00:42:41.177 Removing: /var/run/dpdk/spdk_pid476985
00:42:41.177 Removing: /var/run/dpdk/spdk_pid477376
00:42:41.177 Removing: /var/run/dpdk/spdk_pid478926
00:42:41.177 Removing: /var/run/dpdk/spdk_pid479271
00:42:41.177 Removing: /var/run/dpdk/spdk_pid479602
00:42:41.177 Removing: /var/run/dpdk/spdk_pid482066
00:42:41.177 Removing: /var/run/dpdk/spdk_pid482079
00:42:41.177 Removing: /var/run/dpdk/spdk_pid483558
00:42:41.177 Removing: /var/run/dpdk/spdk_pid484031
00:42:41.177 Removing: /var/run/dpdk/spdk_pid484044
00:42:41.177 Removing: /var/run/dpdk/spdk_pid97482
00:42:41.437 Removing: /var/run/dpdk/spdk_pid98221
00:42:41.437 Removing: /var/run/dpdk/spdk_pid99160
00:42:41.437 Removing: /var/run/dpdk/spdk_pid99490
00:42:41.437 Clean
00:42:41.437 00:48:05 -- common/autotest_common.sh@1453 -- # return 0
00:42:41.437 00:48:05 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:42:41.437 00:48:05 -- common/autotest_common.sh@732 -- # xtrace_disable
00:42:41.437 00:48:05 -- common/autotest_common.sh@10 -- # set +x
00:42:41.437 00:48:05 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:42:41.437 00:48:05 -- common/autotest_common.sh@732 -- # xtrace_disable
00:42:41.437 00:48:05 -- common/autotest_common.sh@10 -- # set +x
00:42:41.437 00:48:05 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:42:41.437 00:48:05 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:42:41.437 00:48:05 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:42:41.437 00:48:05 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:42:41.437 00:48:05 -- spdk/autotest.sh@398 -- # hostname
00:42:41.437 00:48:05 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:42:41.695 geninfo: WARNING: invalid characters removed from testname!
00:43:13.770 00:48:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:16.311 00:48:39 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:19.607 00:48:42 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:22.149 00:48:45 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:25.445 00:48:48 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:27.997 00:48:51 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:31.293 00:48:54 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:43:31.293 00:48:54 -- spdk/autorun.sh@1 -- $ timing_finish
00:43:31.293 00:48:54 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:43:31.293 00:48:54 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:43:31.293 00:48:54 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:43:31.293 00:48:54 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:43:31.293 + [[ -n 6057 ]]
00:43:31.293 + sudo kill 6057
00:43:31.302 [Pipeline] }
00:43:31.316 [Pipeline] // stage
00:43:31.320 [Pipeline] }
00:43:31.334 [Pipeline] // timeout
00:43:31.338 [Pipeline] }
00:43:31.346 [Pipeline] // catchError
00:43:31.350 [Pipeline] }
00:43:31.363 [Pipeline] // wrap
00:43:31.367 [Pipeline] }
00:43:31.378 [Pipeline] // catchError
00:43:31.386 [Pipeline] stage
00:43:31.388 [Pipeline] { (Epilogue)
00:43:31.400 [Pipeline] catchError
00:43:31.401 [Pipeline] {
00:43:31.412 [Pipeline] echo
00:43:31.413 Cleanup processes
00:43:31.417 [Pipeline] sh
00:43:31.702 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:31.702 496945 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:31.719 [Pipeline] sh
00:43:32.010 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:43:32.010 ++ grep -v 'sudo pgrep'
00:43:32.010 ++ awk '{print $1}'
00:43:32.010 + sudo kill -9
00:43:32.010 + true
00:43:32.022 [Pipeline] sh
00:43:32.306 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:43:44.569 [Pipeline] sh
00:43:44.858 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:43:44.858 Artifacts sizes are good
00:43:44.876 [Pipeline] archiveArtifacts
00:43:44.886 Archiving artifacts
00:43:45.423 [Pipeline] sh
00:43:45.711 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:43:45.725 [Pipeline] cleanWs
00:43:45.735 [WS-CLEANUP] Deleting project workspace...
00:43:45.735 [WS-CLEANUP] Deferred wipeout is used...
00:43:45.742 [WS-CLEANUP] done
00:43:45.743 [Pipeline] }
00:43:45.759 [Pipeline] // catchError
00:43:45.770 [Pipeline] sh
00:43:46.053 + logger -p user.info -t JENKINS-CI
00:43:46.061 [Pipeline] }
00:43:46.076 [Pipeline] // stage
00:43:46.082 [Pipeline] }
00:43:46.095 [Pipeline] // node
00:43:46.100 [Pipeline] End of Pipeline
00:43:46.144 Finished: SUCCESS